Feb 17 17:44:29 crc systemd[1]: Starting Kubernetes Kubelet... Feb 17 17:44:29 crc restorecon[4587]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Feb 17 17:44:29 
crc restorecon[4587]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 
17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:29 crc 
restorecon[4587]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 17:44:29 
crc restorecon[4587]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 17:44:29 
crc restorecon[4587]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 
17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 17:44:29 crc 
restorecon[4587]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 17:44:29 crc restorecon[4587]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 
17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 
17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc 
restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 17:44:30 crc restorecon[4587]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 17:44:30 crc restorecon[4587]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 17:44:30 crc restorecon[4587]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 17 17:44:30 crc kubenswrapper[4680]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 17 17:44:30 crc kubenswrapper[4680]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 17 17:44:30 crc kubenswrapper[4680]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 17 17:44:30 crc kubenswrapper[4680]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
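
These flag-deprecation warnings (and the two that follow) all point at the same remediation: move the settings out of the kubelet command line and into the file passed via the kubelet's --config flag. The sketch below is illustrative only; every value in it is a hypothetical placeholder, not read from this node's configuration, and it assumes the documented KubeletConfiguration field names (containerRuntimeEndpoint, evictionHard, volumePluginDir, registerWithTaints, systemReserved, featureGates) for kubelet.config.k8s.io/v1beta1. The --pod-infra-container-image flag mentioned below is the exception: per its own warning, the sandbox image now comes from the CRI runtime rather than a config-file field.

# Rough Python sketch: map the deprecated kubelet flags warned about above
# onto the equivalent KubeletConfiguration fields and write them to a file
# suitable for the kubelet's --config flag. All values are placeholders.
import json

kubelet_config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    # replaces --container-runtime-endpoint
    "containerRuntimeEndpoint": "unix:///var/run/crio/crio.sock",
    # --minimum-container-ttl-duration is superseded by eviction settings
    "evictionHard": {"memory.available": "100Mi"},
    # replaces --volume-plugin-dir
    "volumePluginDir": "/etc/kubernetes/kubelet-plugins/volume/exec",
    # replaces --register-with-taints (hypothetical taint shown)
    "registerWithTaints": [
        {"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}
    ],
    # replaces --system-reserved
    "systemReserved": {"cpu": "500m", "memory": "1Gi"},
    # feature gates (see the feature_gate.go warnings below) are likewise
    # set here rather than on the command line
    "featureGates": {"DisableKubeletCloudCredentialProviders": True},
}

with open("kubelet-config.json", "w") as f:
    json.dump(kubelet_config, f, indent=2)

The output is written as JSON here only for a dependency-free example; a YAML rendering of the same mapping is the more common form. The point is the flag-to-field correspondence, not the specific values.
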
Feb 17 17:44:30 crc kubenswrapper[4680]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 17 17:44:30 crc kubenswrapper[4680]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.876573 4680 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880427 4680 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880452 4680 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880460 4680 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880468 4680 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880475 4680 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880482 4680 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880490 4680 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880496 4680 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880507 4680 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880513 4680 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880519 4680 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880525 4680 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880531 4680 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880537 4680 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880543 4680 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880549 4680 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880554 4680 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880560 4680 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880565 4680 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880570 4680 feature_gate.go:330] 
unrecognized feature gate: OVNObservability Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880576 4680 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880581 4680 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880586 4680 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880591 4680 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880596 4680 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880602 4680 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880608 4680 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880615 4680 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880623 4680 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880629 4680 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880634 4680 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880640 4680 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880645 4680 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880651 4680 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880656 4680 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880661 4680 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880668 4680 feature_gate.go:330] unrecognized feature gate: Example Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880673 4680 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880678 4680 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880683 4680 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880689 4680 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880694 4680 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880699 4680 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880705 4680 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880710 4680 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880715 4680 
feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880720 4680 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880725 4680 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880733 4680 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880739 4680 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880745 4680 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880750 4680 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880755 4680 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880761 4680 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880766 4680 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880771 4680 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880776 4680 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880781 4680 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880786 4680 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880791 4680 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880797 4680 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880805 4680 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880810 4680 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880815 4680 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880820 4680 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880825 4680 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880830 4680 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880835 4680 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880840 4680 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880847 4680 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.880852 4680 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 
17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882231 4680 flags.go:64] FLAG: --address="0.0.0.0" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882254 4680 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882268 4680 flags.go:64] FLAG: --anonymous-auth="true" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882277 4680 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882286 4680 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882293 4680 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882302 4680 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882309 4680 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882334 4680 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882341 4680 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882347 4680 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882354 4680 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882360 4680 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882366 4680 flags.go:64] FLAG: --cgroup-root="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882373 4680 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882379 4680 flags.go:64] FLAG: --client-ca-file="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882384 4680 flags.go:64] FLAG: --cloud-config="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882390 4680 flags.go:64] FLAG: --cloud-provider="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882396 4680 flags.go:64] FLAG: --cluster-dns="[]" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882403 4680 flags.go:64] FLAG: --cluster-domain="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882411 4680 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882418 4680 flags.go:64] FLAG: --config-dir="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882432 4680 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882445 4680 flags.go:64] FLAG: --container-log-max-files="5" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882457 4680 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882463 4680 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882470 4680 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882476 4680 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882482 4680 flags.go:64] FLAG: --contention-profiling="false" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 
17:44:30.882489 4680 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882495 4680 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882502 4680 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882509 4680 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882523 4680 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882566 4680 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882575 4680 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882581 4680 flags.go:64] FLAG: --enable-load-reader="false" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882587 4680 flags.go:64] FLAG: --enable-server="true" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882593 4680 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882602 4680 flags.go:64] FLAG: --event-burst="100" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882610 4680 flags.go:64] FLAG: --event-qps="50" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882617 4680 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882623 4680 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882629 4680 flags.go:64] FLAG: --eviction-hard="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882636 4680 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882642 4680 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882649 4680 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882655 4680 flags.go:64] FLAG: --eviction-soft="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882661 4680 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882667 4680 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882673 4680 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882679 4680 flags.go:64] FLAG: --experimental-mounter-path="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882685 4680 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882691 4680 flags.go:64] FLAG: --fail-swap-on="true" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882697 4680 flags.go:64] FLAG: --feature-gates="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882705 4680 flags.go:64] FLAG: --file-check-frequency="20s" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882711 4680 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882718 4680 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882724 4680 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 
17:44:30.882730 4680 flags.go:64] FLAG: --healthz-port="10248" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882736 4680 flags.go:64] FLAG: --help="false" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882743 4680 flags.go:64] FLAG: --hostname-override="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882750 4680 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882756 4680 flags.go:64] FLAG: --http-check-frequency="20s" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882762 4680 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882768 4680 flags.go:64] FLAG: --image-credential-provider-config="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882774 4680 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882781 4680 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882787 4680 flags.go:64] FLAG: --image-service-endpoint="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882793 4680 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882799 4680 flags.go:64] FLAG: --kube-api-burst="100" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882805 4680 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882812 4680 flags.go:64] FLAG: --kube-api-qps="50" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882818 4680 flags.go:64] FLAG: --kube-reserved="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882824 4680 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882829 4680 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882836 4680 flags.go:64] FLAG: --kubelet-cgroups="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882842 4680 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882848 4680 flags.go:64] FLAG: --lock-file="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882853 4680 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882859 4680 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882865 4680 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882876 4680 flags.go:64] FLAG: --log-json-split-stream="false" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882882 4680 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882888 4680 flags.go:64] FLAG: --log-text-split-stream="false" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882894 4680 flags.go:64] FLAG: --logging-format="text" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882901 4680 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882908 4680 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882914 4680 flags.go:64] FLAG: --manifest-url="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882920 4680 
flags.go:64] FLAG: --manifest-url-header="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882929 4680 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882935 4680 flags.go:64] FLAG: --max-open-files="1000000" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882943 4680 flags.go:64] FLAG: --max-pods="110" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882950 4680 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882957 4680 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882963 4680 flags.go:64] FLAG: --memory-manager-policy="None" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882969 4680 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882975 4680 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882981 4680 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.882987 4680 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883001 4680 flags.go:64] FLAG: --node-status-max-images="50" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883007 4680 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883014 4680 flags.go:64] FLAG: --oom-score-adj="-999" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883020 4680 flags.go:64] FLAG: --pod-cidr="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883027 4680 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883037 4680 flags.go:64] FLAG: --pod-manifest-path="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883043 4680 flags.go:64] FLAG: --pod-max-pids="-1" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883050 4680 flags.go:64] FLAG: --pods-per-core="0" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883056 4680 flags.go:64] FLAG: --port="10250" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883062 4680 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883068 4680 flags.go:64] FLAG: --provider-id="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883074 4680 flags.go:64] FLAG: --qos-reserved="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883080 4680 flags.go:64] FLAG: --read-only-port="10255" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883086 4680 flags.go:64] FLAG: --register-node="true" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883092 4680 flags.go:64] FLAG: --register-schedulable="true" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883098 4680 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883109 4680 flags.go:64] FLAG: --registry-burst="10" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883115 4680 flags.go:64] FLAG: --registry-qps="5" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883121 4680 flags.go:64] 
FLAG: --reserved-cpus="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883127 4680 flags.go:64] FLAG: --reserved-memory="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883134 4680 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883140 4680 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883147 4680 flags.go:64] FLAG: --rotate-certificates="false" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883153 4680 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883159 4680 flags.go:64] FLAG: --runonce="false" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883165 4680 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883172 4680 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883178 4680 flags.go:64] FLAG: --seccomp-default="false" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883185 4680 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883196 4680 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883211 4680 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883219 4680 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883227 4680 flags.go:64] FLAG: --storage-driver-password="root" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883234 4680 flags.go:64] FLAG: --storage-driver-secure="false" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883240 4680 flags.go:64] FLAG: --storage-driver-table="stats" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883246 4680 flags.go:64] FLAG: --storage-driver-user="root" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883252 4680 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883259 4680 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883265 4680 flags.go:64] FLAG: --system-cgroups="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883271 4680 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883283 4680 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883289 4680 flags.go:64] FLAG: --tls-cert-file="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883295 4680 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883302 4680 flags.go:64] FLAG: --tls-min-version="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883309 4680 flags.go:64] FLAG: --tls-private-key-file="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883338 4680 flags.go:64] FLAG: --topology-manager-policy="none" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883344 4680 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883350 4680 flags.go:64] FLAG: --topology-manager-scope="container" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883357 4680 flags.go:64] 
FLAG: --v="2" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883367 4680 flags.go:64] FLAG: --version="false" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883376 4680 flags.go:64] FLAG: --vmodule="" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883384 4680 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.883391 4680 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883587 4680 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883601 4680 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883607 4680 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883613 4680 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883618 4680 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883624 4680 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883630 4680 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883635 4680 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883640 4680 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883645 4680 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883651 4680 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883656 4680 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883663 4680 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883669 4680 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883677 4680 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883686 4680 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883701 4680 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883711 4680 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883719 4680 feature_gate.go:330] unrecognized feature gate: Example Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883726 4680 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883732 4680 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883739 4680 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883746 4680 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883755 4680 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883765 4680 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883772 4680 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883778 4680 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883784 4680 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883792 4680 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
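The kubelet evaluates its feature-gate configuration several times during startup, so every gate name it does not recognize (the OpenShift-specific ones) is warned about once per pass, which is why the same feature_gate.go:330 warnings repeat above. A small sketch, using the same hypothetical kubelet.log, that collapses the repeats into a unique, counted list:

```python
import re
from collections import Counter

# Rough sketch, not part of the log: count the feature_gate.go:330
# "unrecognized feature gate" warnings so each gate appears once.
GATE_RE = re.compile(r"unrecognized feature gate: (\w+)")

counts: Counter[str] = Counter()
with open("kubelet.log") as fh:
    for line in fh:
        counts.update(GATE_RE.findall(line))

for gate, n in sorted(counts.items()):
    print(f"{gate}: {n} warning(s)")
```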
Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883798 4680 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883805 4680 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883810 4680 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883817 4680 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883822 4680 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883828 4680 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883834 4680 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883839 4680 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883845 4680 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883850 4680 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883856 4680 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883861 4680 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883866 4680 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883871 4680 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883877 4680 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883883 4680 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883888 4680 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883893 4680 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883899 4680 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883904 4680 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883910 4680 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
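Alongside the "unrecognized" warnings, the feature_gate.go:351 and :353 entries report gates that are explicitly set even though they are already deprecated or GA (KMSv1, ValidatingAdmissionPolicy, CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders above). A sketch, again against the hypothetical kubelet.log, that lists those explicitly-set gates:

```python
import re

# Rough sketch, not part of the log: extract the gates the kubelet warns are
# explicitly set despite being GA or deprecated (feature_gate.go:351/:353).
SET_RE = re.compile(r"Setting (GA|deprecated) feature gate (\w+)=(true|false)")

seen = set()
with open("kubelet.log") as fh:
    for line in fh:
        for kind, gate, value in SET_RE.findall(line):
            seen.add((gate, kind, value))

for gate, kind, value in sorted(seen):
    print(f"{gate} ({kind}) explicitly set to {value}")
```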
Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883917 4680 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883922 4680 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883928 4680 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883933 4680 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883938 4680 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883943 4680 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883948 4680 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883953 4680 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883958 4680 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883965 4680 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883970 4680 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883975 4680 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883980 4680 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883985 4680 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883991 4680 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.883996 4680 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.884002 4680 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.884008 4680 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.884013 4680 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.884021 4680 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.884026 4680 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.884036 4680 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.897120 
4680 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.897150 4680 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897208 4680 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897215 4680 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897219 4680 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897224 4680 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897229 4680 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897236 4680 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897240 4680 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897245 4680 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897249 4680 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897252 4680 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897257 4680 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897260 4680 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897264 4680 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897267 4680 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897271 4680 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897275 4680 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897278 4680 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897282 4680 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897286 4680 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897289 4680 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897294 4680 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
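Each feature-gate pass ends with a feature_gate.go:386 summary of the effective map, printed as "feature gates: {map[Name:bool ...]}" (one such line appears just above the "Kubelet version" entry). A sketch that turns that summary format into a Python dict; the `summary` string below is a shortened, hypothetical copy of the logged line, and a real run would read it from kubelet.log instead:

```python
import re

# Rough sketch, not part of the log: parse the feature_gate.go:386 summary
# "feature gates: {map[Name:bool ...]}" into a dict of booleans.
summary = ("feature gates: {map[CloudDualStackNodeIPs:true "
           "DisableKubeletCloudCredentialProviders:true KMSv1:true "
           "NodeSwap:false ValidatingAdmissionPolicy:true]}")

gates = {name: value == "true"
         for name, value in re.findall(r"(\w+):(true|false)", summary)}

print(gates["KMSv1"])      # True
print(gates["NodeSwap"])   # False
```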
Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897298 4680 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897301 4680 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897306 4680 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897311 4680 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897334 4680 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897338 4680 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897341 4680 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897345 4680 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897350 4680 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897354 4680 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897358 4680 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897361 4680 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897365 4680 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897370 4680 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897373 4680 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897377 4680 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897380 4680 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897384 4680 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897387 4680 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897391 4680 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897394 4680 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897398 4680 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897401 4680 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897406 4680 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897410 4680 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 17:44:30 crc kubenswrapper[4680]: 
W0217 17:44:30.897413 4680 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897417 4680 feature_gate.go:330] unrecognized feature gate: Example Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897420 4680 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897424 4680 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897427 4680 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897431 4680 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897434 4680 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897437 4680 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897441 4680 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897444 4680 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897448 4680 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897451 4680 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897455 4680 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897458 4680 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897462 4680 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897465 4680 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897469 4680 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897472 4680 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897476 4680 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897480 4680 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897484 4680 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897487 4680 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897491 4680 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897496 4680 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
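The deprecation notice at the start of this boot says --system-reserved belongs in the kubelet config file rather than on the command line, and the flag dump above shows its current value ("cpu=200m,ephemeral-storage=350Mi,memory=350Mi"). A sketch of splitting that flag value into the key/value pairs a KubeletConfiguration systemReserved stanza would carry, assuming the values contain no embedded commas or equals signs:

```python
# Rough sketch, not part of the log: split a key=value,key=value kubelet flag
# value (here --system-reserved) into a dict.
def parse_reserved(value: str) -> dict[str, str]:
    return dict(item.split("=", 1) for item in value.split(",") if item)

print(parse_reserved("cpu=200m,ephemeral-storage=350Mi,memory=350Mi"))
# {'cpu': '200m', 'ephemeral-storage': '350Mi', 'memory': '350Mi'}
```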
Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897502 4680 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.897508 4680 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897623 4680 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897633 4680 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897638 4680 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897642 4680 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897647 4680 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897651 4680 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897655 4680 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897660 4680 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897664 4680 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897668 4680 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897673 4680 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897677 4680 feature_gate.go:330] unrecognized feature gate: Example Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897682 4680 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897686 4680 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897691 4680 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897695 4680 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897700 4680 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897704 4680 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897707 4680 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897711 4680 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897714 4680 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 
17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897719 4680 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897723 4680 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897727 4680 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897731 4680 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897735 4680 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897738 4680 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897742 4680 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897747 4680 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897752 4680 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897757 4680 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897761 4680 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897765 4680 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897768 4680 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897772 4680 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897776 4680 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897780 4680 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897783 4680 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897787 4680 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897790 4680 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897794 4680 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897798 4680 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897801 4680 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897805 4680 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897808 4680 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897812 4680 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 17:44:30 crc 
kubenswrapper[4680]: W0217 17:44:30.897816 4680 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897821 4680 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897825 4680 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897828 4680 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897832 4680 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897835 4680 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897839 4680 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897843 4680 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897848 4680 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897852 4680 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897857 4680 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897861 4680 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897864 4680 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897869 4680 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897872 4680 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897877 4680 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897880 4680 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897884 4680 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897887 4680 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897891 4680 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897894 4680 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897898 4680 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897901 4680 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897905 4680 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 17:44:30 crc kubenswrapper[4680]: W0217 17:44:30.897909 4680 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 
17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.897914 4680 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.898062 4680 server.go:940] "Client rotation is on, will bootstrap in background" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.902660 4680 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.902733 4680 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.903867 4680 server.go:997] "Starting client certificate rotation" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.903886 4680 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.904198 4680 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-29 17:19:06.554225934 +0000 UTC Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.905596 4680 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.932085 4680 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.934368 4680 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 17:44:30 crc kubenswrapper[4680]: E0217 17:44:30.935874 4680 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.948384 4680 log.go:25] "Validated CRI v1 runtime API" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.983681 4680 log.go:25] "Validated CRI v1 image API" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.985162 4680 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.991606 4680 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-17-17-38-44-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 17 17:44:30 crc kubenswrapper[4680]: I0217 17:44:30.991659 4680 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 
fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.009006 4680 manager.go:217] Machine: {Timestamp:2026-02-17 17:44:31.006765824 +0000 UTC m=+0.557664503 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2800000 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:3f55a8ca-57de-4e9a-afc9-858ba7e46832 BootID:03bcc060-8056-40d5-9505-0ca683e24302 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:1e:a0:ce Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:1e:a0:ce Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:cd:05:72 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:bc:04:60 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:e4:ae:cc Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:a0:f6:24 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:3a:4b:d9:cc:0c:0e Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:aa:05:df:c5:66:df Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified 
Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.009211 4680 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.009365 4680 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.010886 4680 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.011065 4680 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.011094 4680 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.011277 4680 topology_manager.go:138] "Creating topology manager with none policy" Feb 17 
17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.011288 4680 container_manager_linux.go:303] "Creating device plugin manager" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.011896 4680 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.011921 4680 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.012068 4680 state_mem.go:36] "Initialized new in-memory state store" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.012439 4680 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.016098 4680 kubelet.go:418] "Attempting to sync node with API server" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.016122 4680 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.016146 4680 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.016172 4680 kubelet.go:324] "Adding apiserver pod source" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.016188 4680 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 17 17:44:31 crc kubenswrapper[4680]: W0217 17:44:31.021379 4680 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.021488 4680 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 17 17:44:31 crc kubenswrapper[4680]: E0217 17:44:31.021485 4680 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" Feb 17 17:44:31 crc kubenswrapper[4680]: W0217 17:44:31.021712 4680 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Feb 17 17:44:31 crc kubenswrapper[4680]: E0217 17:44:31.021761 4680 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.023633 4680 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
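
Each kubenswrapper record above carries a klog text header inside the journald prefix: a severity letter (I, W or E), the date as MMDD, the wall-clock time with microseconds, the emitting PID, the source file and line, then the message, e.g. "E0217 17:44:31.021485 4680 reflector.go:158] ...". A minimal Python sketch, assuming the journal text has been saved to a file named kubelet.log (the file name and the exact regexes are illustrative, not part of the capture), that splits the stream back into records and tallies the repeated "connection refused" dials to api-int.crc.testing:6443 seen while the API server is still down:

    import re
    from collections import Counter

    # A kubelet record in the capture looks like:
    #   Feb 17 17:44:31 crc kubenswrapper[4680]: E0217 17:44:31.021485 4680 reflector.go:158] "Unhandled Error" err="..."
    RECORD = re.compile(r'(?=Feb \d{2}\s+\d{2}:\d{2}:\d{2}\s+crc\s)')   # start of a journald record
    HEADER = re.compile(
        r'kubenswrapper\[\d+\]:\s+(?P<sev>[IWE])\d{4}\s+'
        r'(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+\d+\s+(?P<src>\S+:\d+)\]'
    )

    text = open("kubelet.log").read()            # assumed capture of the journal text above
    severities, refused = Counter(), Counter()
    for record in RECORD.split(text):
        m = HEADER.search(record)
        if not m:
            continue                             # systemd lines and anything else without a klog header
        severities[m.group("sev")] += 1
        if "connect: connection refused" in record:
            refused[m.group("src")] += 1         # call sites that could not reach api-int.crc.testing:6443

    print(severities)                            # rough I/W/E breakdown of the startup
    print(refused.most_common(5))

Splitting on the journald timestamp rather than on newlines keeps the sketch tolerant of records that were wrapped across physical lines in this capture.
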
Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.025675 4680 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.027409 4680 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.027439 4680 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.027449 4680 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.027458 4680 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.027476 4680 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.027488 4680 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.027499 4680 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.027518 4680 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.027532 4680 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.027546 4680 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.027568 4680 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.027580 4680 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.031133 4680 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.031726 4680 server.go:1280] "Started kubelet" Feb 17 17:44:31 crc systemd[1]: Started Kubernetes Kubelet. 
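
The volume-manager reconstruction that follows ("Volume is marked as uncertain and added into the actual state", reconstruct.go:130) enumerates every volume the kubelet re-discovers under its root directory /var/lib/kubelet while the API server is still unreachable; each record names the pod UID and a volumeName of the form kubernetes.io/<plugin>/<podUID>-<volume>. A short, self-contained sketch over the same assumed kubelet.log capture that summarises those records by plugin type (the grouping is purely illustrative, nothing the kubelet itself reports):

    import re
    from collections import defaultdict

    # Reconstruction records name a pod UID and a volume, e.g.:
    #   reconstruct.go:130] "Volume is marked as uncertain and added into the actual state"
    #   pod="" podName="8f668bae-..." volumeName="kubernetes.io/configmap/8f668bae-...-registry-certificates"
    VOLUME = re.compile(
        r'reconstruct\.go:\d+\].*?podName="(?P<pod>[0-9a-f-]+)"'
        r'.*?volumeName="kubernetes\.io/(?P<plugin>[\w-]+)/(?P<name>[^"]+)"',
        re.DOTALL,                               # records may be wrapped across physical lines
    )

    text = open("kubelet.log").read()            # same assumed capture as above
    by_plugin, pods = defaultdict(set), set()
    for m in VOLUME.finditer(text):
        pods.add(m.group("pod"))
        by_plugin[m.group("plugin")].add(m.group("name"))

    print(f"{len(pods)} pods with reconstructed volumes")
    for plugin, names in sorted(by_plugin.items()):
        print(f"  {plugin}: {len(names)}")       # configmap, empty-dir, projected, secret, ...
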
Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.033898 4680 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.034020 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.034076 4680 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.034097 4680 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.034664 4680 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.034686 4680 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.034697 4680 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.034767 4680 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 17 17:44:31 crc kubenswrapper[4680]: E0217 17:44:31.035116 4680 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 17:44:31 crc kubenswrapper[4680]: W0217 17:44:31.035203 4680 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Feb 17 17:44:31 crc kubenswrapper[4680]: E0217 17:44:31.035278 4680 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.035341 4680 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 17 17:44:31 crc kubenswrapper[4680]: E0217 17:44:31.036796 4680 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="200ms" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.035374 4680 factory.go:55] Registering systemd factory Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.035860 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 20:54:43.111399958 +0000 UTC Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.040597 4680 server.go:460] "Adding debug handlers to kubelet server" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.042482 4680 factory.go:221] Registration of the systemd container factory successfully Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.043539 4680 factory.go:153] Registering CRI-O factory Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.043557 4680 factory.go:221] Registration of the crio 
container factory successfully Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.043613 4680 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.043637 4680 factory.go:103] Registering Raw factory Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.043651 4680 manager.go:1196] Started watching for new ooms in manager Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.044193 4680 manager.go:319] Starting recovery of all containers Feb 17 17:44:31 crc kubenswrapper[4680]: E0217 17:44:31.046104 4680 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.180:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189519afbda2c936 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 17:44:31.031691574 +0000 UTC m=+0.582590263,LastTimestamp:2026-02-17 17:44:31.031691574 +0000 UTC m=+0.582590263,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057241 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057298 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057338 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057354 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057367 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057385 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 
17:44:31.057398 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057411 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057427 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057438 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057451 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057472 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057485 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057499 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057511 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057530 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057544 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057562 4680 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057575 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057707 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057745 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057758 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057770 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057782 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057797 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057812 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057828 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057842 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057853 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057866 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057877 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057889 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057907 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057922 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057942 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057954 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057967 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057978 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.057990 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058003 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058015 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058028 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058041 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058056 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058068 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058079 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058091 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058102 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058115 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058127 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058145 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058161 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058178 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058192 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058206 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058218 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058231 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058244 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058256 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058269 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058281 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058312 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058346 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058359 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058372 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058385 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058396 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058410 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058422 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058435 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058448 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058461 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058474 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058486 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058499 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058514 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058526 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058568 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058580 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058593 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058606 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058618 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058632 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058643 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058736 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058752 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058769 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058786 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058801 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058813 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058825 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058838 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058853 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058866 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058877 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058890 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058904 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058919 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058930 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058944 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058956 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058968 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058980 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.058991 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059008 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059023 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059037 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059052 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059065 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059078 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059090 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059104 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059116 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059129 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059142 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059153 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059164 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" 
volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059175 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059187 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059199 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059212 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059225 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059237 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059250 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059276 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059291 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059303 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059336 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059349 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059362 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059750 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059777 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059795 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059809 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059823 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059837 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059849 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059867 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059879 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059892 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059905 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059918 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059930 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059941 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059953 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059966 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059978 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.059990 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060002 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060015 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060028 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060041 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060053 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060066 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060078 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060091 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060104 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060115 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060130 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060158 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060182 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060201 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060216 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060233 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060251 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060263 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060281 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060293 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060305 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060339 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060355 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060369 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060382 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060394 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060406 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060418 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060430 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060441 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060454 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060466 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.060478 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.065121 4680 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.065184 4680 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.065207 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.065228 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.065246 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.065260 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.065276 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.065291 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.065309 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.065384 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.065397 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.065409 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.065426 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.065440 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.065450 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.065460 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.065478 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.065489 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.065500 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.065512 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.065523 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.065533 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.065549 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.065560 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.065570 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.065585 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.065596 4680 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.065605 4680 reconstruct.go:97] "Volume reconstruction finished" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.065614 4680 reconciler.go:26] "Reconciler: start to sync state" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.070577 4680 manager.go:324] Recovery completed Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.079662 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.081256 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.081296 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.081307 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.082122 4680 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.082135 4680 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.082155 4680 state_mem.go:36] "Initialized new in-memory state store" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.096525 4680 policy_none.go:49] "None policy: Start" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.097617 4680 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.097695 4680 state_mem.go:35] "Initializing new in-memory state store" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.100800 4680 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.103923 4680 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.103985 4680 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.104009 4680 kubelet.go:2335] "Starting kubelet main sync loop" Feb 17 17:44:31 crc kubenswrapper[4680]: E0217 17:44:31.104073 4680 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 17 17:44:31 crc kubenswrapper[4680]: W0217 17:44:31.106299 4680 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Feb 17 17:44:31 crc kubenswrapper[4680]: E0217 17:44:31.106417 4680 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" Feb 17 17:44:31 crc kubenswrapper[4680]: E0217 17:44:31.136094 4680 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.152433 4680 manager.go:334] "Starting Device Plugin manager" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.152565 4680 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.152577 4680 server.go:79] "Starting device plugin registration server" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.152959 4680 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.152985 4680 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.153196 4680 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.153368 4680 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.153384 4680 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 17 17:44:31 crc kubenswrapper[4680]: E0217 17:44:31.159744 4680 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.204851 4680 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.204956 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.205715 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.205814 4680 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.205908 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.206144 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.206363 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.206415 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.207336 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.207360 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.207370 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.207661 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.207677 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.207685 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.207774 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.207900 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.207963 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.208331 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.208412 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.208466 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.208587 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.208702 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.208721 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.208730 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.208839 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.208893 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.209576 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.209611 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.209626 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.209779 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.209850 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.209882 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.210061 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.210149 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.210203 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.210496 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.210632 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.210658 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.210671 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.210639 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.210797 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.210972 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.211000 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.211634 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.211757 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.211826 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:31 crc kubenswrapper[4680]: E0217 17:44:31.237843 4680 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="400ms" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.254046 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.255227 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.255252 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.255263 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.255288 4680 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 17:44:31 crc kubenswrapper[4680]: E0217 17:44:31.256299 4680 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.180:6443: connect: connection refused" node="crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.267592 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.267624 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.267641 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.267655 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.267681 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.267696 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.267748 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.267784 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.267850 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.267867 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.267882 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.267895 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.267910 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.267925 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.267939 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.369200 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.369478 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.369493 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.369616 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.369654 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.369677 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.369627 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.369719 4680 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.369736 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.369698 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.369776 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.369816 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.369832 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.369873 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.369898 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.369919 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.369937 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.369953 4680 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.369969 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.369983 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.369998 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.370143 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.370224 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.370278 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.370346 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.370381 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.370419 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:44:31 crc 
kubenswrapper[4680]: I0217 17:44:31.370454 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.370267 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.370254 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.457274 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.458494 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.458519 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.458527 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.458565 4680 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 17:44:31 crc kubenswrapper[4680]: E0217 17:44:31.458872 4680 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.180:6443: connect: connection refused" node="crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.545274 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.553940 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.566851 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.584596 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.589375 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 17 17:44:31 crc kubenswrapper[4680]: W0217 17:44:31.589930 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-c9e8e848e049882138770b8d968de51ea4f252f5d16509db6ea2798588f6ec2f WatchSource:0}: Error finding container c9e8e848e049882138770b8d968de51ea4f252f5d16509db6ea2798588f6ec2f: Status 404 returned error can't find the container with id c9e8e848e049882138770b8d968de51ea4f252f5d16509db6ea2798588f6ec2f Feb 17 17:44:31 crc kubenswrapper[4680]: W0217 17:44:31.591443 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-05e0339c6e12f6c7385719349d741791733aba24244e92b6c342651b8a159d68 WatchSource:0}: Error finding container 05e0339c6e12f6c7385719349d741791733aba24244e92b6c342651b8a159d68: Status 404 returned error can't find the container with id 05e0339c6e12f6c7385719349d741791733aba24244e92b6c342651b8a159d68 Feb 17 17:44:31 crc kubenswrapper[4680]: W0217 17:44:31.592124 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-6e1b6c5b97946e6a5a73dfc09d153c3fe481336b61fcdc97e28daa76461749d4 WatchSource:0}: Error finding container 6e1b6c5b97946e6a5a73dfc09d153c3fe481336b61fcdc97e28daa76461749d4: Status 404 returned error can't find the container with id 6e1b6c5b97946e6a5a73dfc09d153c3fe481336b61fcdc97e28daa76461749d4 Feb 17 17:44:31 crc kubenswrapper[4680]: E0217 17:44:31.639342 4680 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="800ms" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.859033 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.860085 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.860123 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.860140 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:31 crc kubenswrapper[4680]: I0217 17:44:31.860162 4680 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 17:44:31 crc kubenswrapper[4680]: E0217 17:44:31.860654 4680 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.180:6443: connect: connection refused" node="crc" Feb 17 17:44:32 crc kubenswrapper[4680]: I0217 17:44:32.035235 4680 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Feb 17 17:44:32 crc kubenswrapper[4680]: I0217 17:44:32.037275 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 
+0000 UTC, rotation deadline is 2025-11-23 03:30:55.479148539 +0000 UTC Feb 17 17:44:32 crc kubenswrapper[4680]: W0217 17:44:32.077785 4680 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Feb 17 17:44:32 crc kubenswrapper[4680]: E0217 17:44:32.077908 4680 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" Feb 17 17:44:32 crc kubenswrapper[4680]: I0217 17:44:32.112171 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"5513e935a50d945c75fad27aa2386c522a9235d11cfb9cb31b21873d8310afe7"} Feb 17 17:44:32 crc kubenswrapper[4680]: I0217 17:44:32.113933 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"34916ef8a56d46d0bd474e801cdfe2075ecd1d3c9d2d0ff7100d2631d5208b84"} Feb 17 17:44:32 crc kubenswrapper[4680]: I0217 17:44:32.113956 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"05e0339c6e12f6c7385719349d741791733aba24244e92b6c342651b8a159d68"} Feb 17 17:44:32 crc kubenswrapper[4680]: I0217 17:44:32.114050 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:32 crc kubenswrapper[4680]: I0217 17:44:32.116688 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:32 crc kubenswrapper[4680]: I0217 17:44:32.116699 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2"} Feb 17 17:44:32 crc kubenswrapper[4680]: I0217 17:44:32.116744 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6e1b6c5b97946e6a5a73dfc09d153c3fe481336b61fcdc97e28daa76461749d4"} Feb 17 17:44:32 crc kubenswrapper[4680]: I0217 17:44:32.116714 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:32 crc kubenswrapper[4680]: I0217 17:44:32.116779 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:32 crc kubenswrapper[4680]: I0217 17:44:32.118161 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519"} Feb 17 17:44:32 crc kubenswrapper[4680]: I0217 17:44:32.118183 4680 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c9e8e848e049882138770b8d968de51ea4f252f5d16509db6ea2798588f6ec2f"} Feb 17 17:44:32 crc kubenswrapper[4680]: I0217 17:44:32.118266 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:32 crc kubenswrapper[4680]: I0217 17:44:32.119473 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:32 crc kubenswrapper[4680]: I0217 17:44:32.119490 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:32 crc kubenswrapper[4680]: I0217 17:44:32.119498 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:32 crc kubenswrapper[4680]: I0217 17:44:32.120879 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4db136d0b8d182eb673364c7e9001fb892bc223507f0178c230ae465b26cdfdd"} Feb 17 17:44:32 crc kubenswrapper[4680]: W0217 17:44:32.246297 4680 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Feb 17 17:44:32 crc kubenswrapper[4680]: E0217 17:44:32.246392 4680 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" Feb 17 17:44:32 crc kubenswrapper[4680]: E0217 17:44:32.440867 4680 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="1.6s" Feb 17 17:44:32 crc kubenswrapper[4680]: W0217 17:44:32.588921 4680 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Feb 17 17:44:32 crc kubenswrapper[4680]: E0217 17:44:32.588985 4680 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" Feb 17 17:44:32 crc kubenswrapper[4680]: W0217 17:44:32.641663 4680 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Feb 17 17:44:32 crc kubenswrapper[4680]: E0217 17:44:32.641741 4680 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" Feb 17 17:44:32 crc kubenswrapper[4680]: I0217 17:44:32.661139 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:32 crc kubenswrapper[4680]: I0217 17:44:32.662977 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:32 crc kubenswrapper[4680]: I0217 17:44:32.663038 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:32 crc kubenswrapper[4680]: I0217 17:44:32.663058 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:32 crc kubenswrapper[4680]: I0217 17:44:32.663112 4680 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 17:44:32 crc kubenswrapper[4680]: E0217 17:44:32.663822 4680 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.180:6443: connect: connection refused" node="crc" Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.035504 4680 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.037661 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 10:38:02.965104139 +0000 UTC Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.129392 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e"} Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.129665 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d"} Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.129814 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4"} Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.129728 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.130930 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.131028 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.131100 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 
17:44:33.131624 4680 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519" exitCode=0 Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.131684 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519"} Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.131754 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.132379 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.132588 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.132688 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.133110 4680 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102" exitCode=0 Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.133197 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.133296 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102"} Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.133714 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.133731 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.133740 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.134637 4680 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="d465ecc6cfa0a120058f1c30c7ca24861e08b7711f3c7e00cee468ab568283a5" exitCode=0 Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.134676 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"d465ecc6cfa0a120058f1c30c7ca24861e08b7711f3c7e00cee468ab568283a5"} Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.134723 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.134851 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.135521 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 
17:44:33.135552 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.135620 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.135664 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.135639 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.135754 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.136137 4680 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 17 17:44:33 crc kubenswrapper[4680]: E0217 17:44:33.137230 4680 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.137436 4680 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="34916ef8a56d46d0bd474e801cdfe2075ecd1d3c9d2d0ff7100d2631d5208b84" exitCode=0 Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.137466 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"34916ef8a56d46d0bd474e801cdfe2075ecd1d3c9d2d0ff7100d2631d5208b84"} Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.137538 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.138400 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.138416 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.138426 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:33 crc kubenswrapper[4680]: I0217 17:44:33.913066 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 17:44:33 crc kubenswrapper[4680]: W0217 17:44:33.972164 4680 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Feb 17 17:44:33 crc kubenswrapper[4680]: E0217 17:44:33.972248 4680 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.035626 4680 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.038441 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 16:42:05.992359551 +0000 UTC Feb 17 17:44:34 crc kubenswrapper[4680]: E0217 17:44:34.042054 4680 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="3.2s" Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.143440 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170"} Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.143712 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa"} Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.143773 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc"} Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.143827 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22"} Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.143894 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b"} Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.144029 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.144913 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.144990 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.145043 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.146479 4680 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642" exitCode=0 Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.146585 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642"} Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.146770 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.147710 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.147757 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.147771 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.148984 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"df64de79d0a8a7383c4a9142c8c6949ca0c73de37c7d379879e869693520d2dd"} Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.149055 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.155824 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.155873 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.155894 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.158818 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.159499 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b337f7427aa447a1d32260cf850cfee8550ef1926c808b4462f07f3142c49a5f"} Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.159554 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"26de5ddc06e863eadae5ce325db8f0a2ba74088e8f27704d5a5029a4ab1f666d"} Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.159574 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b524e303b5baf95c0a7b7716975c1ab1755579270f3097d585466503c56a9c9d"} Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.159700 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.163719 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.163767 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.163784 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.164611 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.164642 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.164656 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.264849 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.266354 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.266413 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.266427 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:34 crc kubenswrapper[4680]: I0217 17:44:34.266461 4680 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 17:44:34 crc kubenswrapper[4680]: E0217 17:44:34.267068 4680 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.180:6443: connect: connection refused" node="crc" Feb 17 17:44:34 crc kubenswrapper[4680]: W0217 17:44:34.384951 4680 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Feb 17 17:44:34 crc kubenswrapper[4680]: E0217 17:44:34.385028 4680 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" Feb 17 17:44:35 crc kubenswrapper[4680]: I0217 17:44:35.039522 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 14:35:20.149589309 +0000 UTC Feb 17 17:44:35 crc kubenswrapper[4680]: I0217 17:44:35.164023 4680 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55" exitCode=0 Feb 17 17:44:35 crc kubenswrapper[4680]: I0217 17:44:35.164093 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55"} Feb 17 17:44:35 crc kubenswrapper[4680]: I0217 
17:44:35.164145 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:35 crc kubenswrapper[4680]: I0217 17:44:35.164423 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:35 crc kubenswrapper[4680]: I0217 17:44:35.164552 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:35 crc kubenswrapper[4680]: I0217 17:44:35.165389 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:35 crc kubenswrapper[4680]: I0217 17:44:35.165419 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:35 crc kubenswrapper[4680]: I0217 17:44:35.165430 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:35 crc kubenswrapper[4680]: I0217 17:44:35.165613 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:35 crc kubenswrapper[4680]: I0217 17:44:35.165650 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:35 crc kubenswrapper[4680]: I0217 17:44:35.165664 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:35 crc kubenswrapper[4680]: I0217 17:44:35.166305 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:35 crc kubenswrapper[4680]: I0217 17:44:35.166346 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:35 crc kubenswrapper[4680]: I0217 17:44:35.166355 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:35 crc kubenswrapper[4680]: I0217 17:44:35.729812 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:44:35 crc kubenswrapper[4680]: I0217 17:44:35.729957 4680 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 17:44:35 crc kubenswrapper[4680]: I0217 17:44:35.729998 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:35 crc kubenswrapper[4680]: I0217 17:44:35.731128 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:35 crc kubenswrapper[4680]: I0217 17:44:35.731178 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:35 crc kubenswrapper[4680]: I0217 17:44:35.731190 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:36 crc kubenswrapper[4680]: I0217 17:44:36.040933 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 11:08:31.543859735 +0000 UTC Feb 17 17:44:36 crc kubenswrapper[4680]: I0217 17:44:36.173622 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a6c12c613dec1f633cee41c47f9286eaac20abe4df2680d53a3421a2d30c4303"} Feb 
17 17:44:36 crc kubenswrapper[4680]: I0217 17:44:36.173690 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"97fd85c0faedf90edc841af7367a6108d1f3edf4900009214a86da350b34529b"} Feb 17 17:44:36 crc kubenswrapper[4680]: I0217 17:44:36.173709 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7670a97c0e0af19970124e3466324b53d6a90ed83984c035733e5b80096a1a62"} Feb 17 17:44:36 crc kubenswrapper[4680]: I0217 17:44:36.173728 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"07b2f0c7e964fe5845bc4ba7e94c5e947524003ef995fa4275ad94fa9ed30c90"} Feb 17 17:44:36 crc kubenswrapper[4680]: I0217 17:44:36.173746 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e4f447a89256dce3c9c818e37ad0ac00c5c5cc530898f333c5bdb91e50f904ee"} Feb 17 17:44:36 crc kubenswrapper[4680]: I0217 17:44:36.173900 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:36 crc kubenswrapper[4680]: I0217 17:44:36.175118 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:36 crc kubenswrapper[4680]: I0217 17:44:36.175155 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:36 crc kubenswrapper[4680]: I0217 17:44:36.175171 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:37 crc kubenswrapper[4680]: I0217 17:44:37.041201 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 07:27:12.258712735 +0000 UTC Feb 17 17:44:37 crc kubenswrapper[4680]: I0217 17:44:37.176123 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:37 crc kubenswrapper[4680]: I0217 17:44:37.177176 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:37 crc kubenswrapper[4680]: I0217 17:44:37.177229 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:37 crc kubenswrapper[4680]: I0217 17:44:37.177243 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:37 crc kubenswrapper[4680]: I0217 17:44:37.360116 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 17:44:37 crc kubenswrapper[4680]: I0217 17:44:37.360303 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:37 crc kubenswrapper[4680]: I0217 17:44:37.361561 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:37 crc kubenswrapper[4680]: I0217 17:44:37.361685 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:37 crc kubenswrapper[4680]: I0217 
17:44:37.361791 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:37 crc kubenswrapper[4680]: I0217 17:44:37.383790 4680 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 17 17:44:37 crc kubenswrapper[4680]: I0217 17:44:37.467619 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:37 crc kubenswrapper[4680]: I0217 17:44:37.468788 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:37 crc kubenswrapper[4680]: I0217 17:44:37.468823 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:37 crc kubenswrapper[4680]: I0217 17:44:37.468838 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:37 crc kubenswrapper[4680]: I0217 17:44:37.468865 4680 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 17:44:37 crc kubenswrapper[4680]: I0217 17:44:37.800838 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:44:37 crc kubenswrapper[4680]: I0217 17:44:37.800969 4680 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 17:44:37 crc kubenswrapper[4680]: I0217 17:44:37.801006 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:37 crc kubenswrapper[4680]: I0217 17:44:37.801970 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:37 crc kubenswrapper[4680]: I0217 17:44:37.802007 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:37 crc kubenswrapper[4680]: I0217 17:44:37.802016 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:38 crc kubenswrapper[4680]: I0217 17:44:38.041438 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 01:02:03.968796302 +0000 UTC Feb 17 17:44:39 crc kubenswrapper[4680]: I0217 17:44:39.042382 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 13:38:54.853627652 +0000 UTC Feb 17 17:44:40 crc kubenswrapper[4680]: I0217 17:44:40.043096 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 05:50:46.816543379 +0000 UTC Feb 17 17:44:40 crc kubenswrapper[4680]: I0217 17:44:40.490366 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 17 17:44:40 crc kubenswrapper[4680]: I0217 17:44:40.490613 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:40 crc kubenswrapper[4680]: I0217 17:44:40.492000 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:40 crc kubenswrapper[4680]: I0217 17:44:40.492054 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:40 crc 
kubenswrapper[4680]: I0217 17:44:40.492066 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:41 crc kubenswrapper[4680]: I0217 17:44:41.043420 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 20:21:58.204285673 +0000 UTC Feb 17 17:44:41 crc kubenswrapper[4680]: I0217 17:44:41.070742 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:44:41 crc kubenswrapper[4680]: I0217 17:44:41.070942 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:41 crc kubenswrapper[4680]: I0217 17:44:41.072046 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:41 crc kubenswrapper[4680]: I0217 17:44:41.072075 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:41 crc kubenswrapper[4680]: I0217 17:44:41.072086 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:41 crc kubenswrapper[4680]: E0217 17:44:41.159857 4680 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 17 17:44:41 crc kubenswrapper[4680]: I0217 17:44:41.768925 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 17:44:41 crc kubenswrapper[4680]: I0217 17:44:41.769160 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:41 crc kubenswrapper[4680]: I0217 17:44:41.773917 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:41 crc kubenswrapper[4680]: I0217 17:44:41.773963 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:41 crc kubenswrapper[4680]: I0217 17:44:41.773974 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:41 crc kubenswrapper[4680]: I0217 17:44:41.990783 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 17:44:42 crc kubenswrapper[4680]: I0217 17:44:42.043960 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 16:06:13.552160183 +0000 UTC Feb 17 17:44:42 crc kubenswrapper[4680]: I0217 17:44:42.188004 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:42 crc kubenswrapper[4680]: I0217 17:44:42.188690 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:42 crc kubenswrapper[4680]: I0217 17:44:42.188912 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:42 crc kubenswrapper[4680]: I0217 17:44:42.188987 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:42 crc kubenswrapper[4680]: I0217 17:44:42.191918 4680 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 17:44:43 crc kubenswrapper[4680]: I0217 17:44:43.044809 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 16:56:30.170048358 +0000 UTC Feb 17 17:44:43 crc kubenswrapper[4680]: I0217 17:44:43.102171 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 17:44:43 crc kubenswrapper[4680]: I0217 17:44:43.189947 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:43 crc kubenswrapper[4680]: I0217 17:44:43.190998 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:43 crc kubenswrapper[4680]: I0217 17:44:43.191056 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:43 crc kubenswrapper[4680]: I0217 17:44:43.191075 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:43 crc kubenswrapper[4680]: I0217 17:44:43.553061 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 17:44:43 crc kubenswrapper[4680]: I0217 17:44:43.553289 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:43 crc kubenswrapper[4680]: I0217 17:44:43.554398 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:43 crc kubenswrapper[4680]: I0217 17:44:43.554443 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:43 crc kubenswrapper[4680]: I0217 17:44:43.554452 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:44 crc kubenswrapper[4680]: I0217 17:44:44.045890 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 23:38:34.732500028 +0000 UTC Feb 17 17:44:44 crc kubenswrapper[4680]: I0217 17:44:44.064229 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 17 17:44:44 crc kubenswrapper[4680]: I0217 17:44:44.064508 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:44 crc kubenswrapper[4680]: I0217 17:44:44.065746 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:44 crc kubenswrapper[4680]: I0217 17:44:44.065799 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:44 crc kubenswrapper[4680]: I0217 17:44:44.065819 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:44 crc kubenswrapper[4680]: I0217 17:44:44.191787 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:44 crc kubenswrapper[4680]: I0217 17:44:44.192530 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 17 17:44:44 crc kubenswrapper[4680]: I0217 17:44:44.192566 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:44 crc kubenswrapper[4680]: I0217 17:44:44.192579 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:44 crc kubenswrapper[4680]: W0217 17:44:44.838032 4680 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 17 17:44:44 crc kubenswrapper[4680]: I0217 17:44:44.838352 4680 trace.go:236] Trace[861013285]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 17:44:34.837) (total time: 10001ms): Feb 17 17:44:44 crc kubenswrapper[4680]: Trace[861013285]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (17:44:44.838) Feb 17 17:44:44 crc kubenswrapper[4680]: Trace[861013285]: [10.001214427s] [10.001214427s] END Feb 17 17:44:44 crc kubenswrapper[4680]: E0217 17:44:44.838480 4680 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 17 17:44:45 crc kubenswrapper[4680]: I0217 17:44:45.035651 4680 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 17 17:44:45 crc kubenswrapper[4680]: I0217 17:44:45.046920 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 12:17:24.637823833 +0000 UTC Feb 17 17:44:45 crc kubenswrapper[4680]: I0217 17:44:45.153059 4680 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 17 17:44:45 crc kubenswrapper[4680]: I0217 17:44:45.153134 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 17 17:44:45 crc kubenswrapper[4680]: I0217 17:44:45.157258 4680 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 17 17:44:45 crc kubenswrapper[4680]: I0217 17:44:45.157338 4680 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 17 17:44:45 crc kubenswrapper[4680]: I0217 17:44:45.195058 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 17:44:45 crc kubenswrapper[4680]: I0217 17:44:45.196453 4680 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170" exitCode=255 Feb 17 17:44:45 crc kubenswrapper[4680]: I0217 17:44:45.196508 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170"} Feb 17 17:44:45 crc kubenswrapper[4680]: I0217 17:44:45.196673 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:45 crc kubenswrapper[4680]: I0217 17:44:45.197571 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:45 crc kubenswrapper[4680]: I0217 17:44:45.197604 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:45 crc kubenswrapper[4680]: I0217 17:44:45.197613 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:45 crc kubenswrapper[4680]: I0217 17:44:45.198118 4680 scope.go:117] "RemoveContainer" containerID="eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170" Feb 17 17:44:45 crc kubenswrapper[4680]: I0217 17:44:45.734399 4680 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 17 17:44:45 crc kubenswrapper[4680]: [+]log ok Feb 17 17:44:45 crc kubenswrapper[4680]: [+]etcd ok Feb 17 17:44:45 crc kubenswrapper[4680]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 17 17:44:45 crc kubenswrapper[4680]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 17 17:44:45 crc kubenswrapper[4680]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 17 17:44:45 crc kubenswrapper[4680]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 17 17:44:45 crc kubenswrapper[4680]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 17 17:44:45 crc kubenswrapper[4680]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 17 17:44:45 crc kubenswrapper[4680]: [+]poststarthook/generic-apiserver-start-informers ok Feb 17 17:44:45 crc kubenswrapper[4680]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 17 17:44:45 crc kubenswrapper[4680]: [+]poststarthook/priority-and-fairness-filter ok Feb 17 17:44:45 crc kubenswrapper[4680]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 17 17:44:45 crc kubenswrapper[4680]: [+]poststarthook/start-apiextensions-informers ok Feb 17 17:44:45 crc kubenswrapper[4680]: [+]poststarthook/start-apiextensions-controllers ok Feb 17 17:44:45 crc kubenswrapper[4680]: [+]poststarthook/crd-informer-synced ok Feb 17 17:44:45 crc kubenswrapper[4680]: 
[+]poststarthook/start-system-namespaces-controller ok Feb 17 17:44:45 crc kubenswrapper[4680]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 17 17:44:45 crc kubenswrapper[4680]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 17 17:44:45 crc kubenswrapper[4680]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 17 17:44:45 crc kubenswrapper[4680]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 17 17:44:45 crc kubenswrapper[4680]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 17 17:44:45 crc kubenswrapper[4680]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 17 17:44:45 crc kubenswrapper[4680]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 17 17:44:45 crc kubenswrapper[4680]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 17 17:44:45 crc kubenswrapper[4680]: [+]poststarthook/bootstrap-controller ok Feb 17 17:44:45 crc kubenswrapper[4680]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 17 17:44:45 crc kubenswrapper[4680]: [+]poststarthook/start-kube-aggregator-informers ok Feb 17 17:44:45 crc kubenswrapper[4680]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 17 17:44:45 crc kubenswrapper[4680]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 17 17:44:45 crc kubenswrapper[4680]: [+]poststarthook/apiservice-registration-controller ok Feb 17 17:44:45 crc kubenswrapper[4680]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 17 17:44:45 crc kubenswrapper[4680]: [+]poststarthook/apiservice-discovery-controller ok Feb 17 17:44:45 crc kubenswrapper[4680]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 17 17:44:45 crc kubenswrapper[4680]: [+]autoregister-completion ok Feb 17 17:44:45 crc kubenswrapper[4680]: [+]poststarthook/apiservice-openapi-controller ok Feb 17 17:44:45 crc kubenswrapper[4680]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 17 17:44:45 crc kubenswrapper[4680]: livez check failed Feb 17 17:44:45 crc kubenswrapper[4680]: I0217 17:44:45.734458 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 17:44:46 crc kubenswrapper[4680]: I0217 17:44:46.047685 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 01:26:39.084721954 +0000 UTC Feb 17 17:44:46 crc kubenswrapper[4680]: I0217 17:44:46.103017 4680 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 17:44:46 crc kubenswrapper[4680]: I0217 17:44:46.103110 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 17:44:46 crc kubenswrapper[4680]: 
I0217 17:44:46.200912 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 17:44:46 crc kubenswrapper[4680]: I0217 17:44:46.202561 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433"} Feb 17 17:44:46 crc kubenswrapper[4680]: I0217 17:44:46.202729 4680 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 17:44:46 crc kubenswrapper[4680]: I0217 17:44:46.203663 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:46 crc kubenswrapper[4680]: I0217 17:44:46.203895 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:46 crc kubenswrapper[4680]: I0217 17:44:46.204038 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:47 crc kubenswrapper[4680]: I0217 17:44:47.049614 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 04:32:19.005073455 +0000 UTC Feb 17 17:44:48 crc kubenswrapper[4680]: I0217 17:44:48.049897 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 19:51:58.332715273 +0000 UTC Feb 17 17:44:48 crc kubenswrapper[4680]: I0217 17:44:48.581043 4680 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 17 17:44:49 crc kubenswrapper[4680]: I0217 17:44:49.027378 4680 apiserver.go:52] "Watching apiserver" Feb 17 17:44:49 crc kubenswrapper[4680]: I0217 17:44:49.033232 4680 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 17 17:44:49 crc kubenswrapper[4680]: I0217 17:44:49.033508 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Feb 17 17:44:49 crc kubenswrapper[4680]: I0217 17:44:49.034256 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:44:49 crc kubenswrapper[4680]: E0217 17:44:49.034356 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:44:49 crc kubenswrapper[4680]: I0217 17:44:49.034428 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 17:44:49 crc kubenswrapper[4680]: I0217 17:44:49.034503 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 17:44:49 crc kubenswrapper[4680]: I0217 17:44:49.034737 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 17:44:49 crc kubenswrapper[4680]: I0217 17:44:49.034877 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:44:49 crc kubenswrapper[4680]: I0217 17:44:49.034903 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:44:49 crc kubenswrapper[4680]: E0217 17:44:49.035243 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:44:49 crc kubenswrapper[4680]: E0217 17:44:49.035296 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:44:49 crc kubenswrapper[4680]: I0217 17:44:49.035804 4680 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 17 17:44:49 crc kubenswrapper[4680]: I0217 17:44:49.036521 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 17 17:44:49 crc kubenswrapper[4680]: I0217 17:44:49.036528 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 17 17:44:49 crc kubenswrapper[4680]: I0217 17:44:49.037592 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 17 17:44:49 crc kubenswrapper[4680]: I0217 17:44:49.037714 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 17 17:44:49 crc kubenswrapper[4680]: I0217 17:44:49.038685 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 17 17:44:49 crc kubenswrapper[4680]: I0217 17:44:49.038720 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 17 17:44:49 crc kubenswrapper[4680]: I0217 17:44:49.039880 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 17 17:44:49 crc kubenswrapper[4680]: I0217 17:44:49.041022 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 17 17:44:49 crc kubenswrapper[4680]: I0217 
17:44:49.041129 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 17 17:44:49 crc kubenswrapper[4680]: I0217 17:44:49.050177 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 08:13:28.975980349 +0000 UTC Feb 17 17:44:49 crc kubenswrapper[4680]: I0217 17:44:49.070101 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:49 crc kubenswrapper[4680]: I0217 17:44:49.079881 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:49 crc kubenswrapper[4680]: I0217 17:44:49.089844 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:49 crc kubenswrapper[4680]: I0217 17:44:49.101900 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:49 crc kubenswrapper[4680]: I0217 17:44:49.113808 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:49 crc kubenswrapper[4680]: I0217 17:44:49.122305 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:49 crc kubenswrapper[4680]: I0217 17:44:49.129478 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:49 crc kubenswrapper[4680]: I0217 17:44:49.138550 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.051125 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 07:20:37.701178134 +0000 UTC Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.154541 4680 trace.go:236] Trace[1746521995]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 17:44:39.500) (total time: 10653ms): Feb 17 17:44:50 crc kubenswrapper[4680]: Trace[1746521995]: ---"Objects listed" error: 10653ms (17:44:50.154) Feb 17 17:44:50 crc kubenswrapper[4680]: Trace[1746521995]: [10.653982981s] [10.653982981s] END Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.154880 4680 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 17 17:44:50 crc kubenswrapper[4680]: E0217 17:44:50.154729 4680 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.156442 4680 trace.go:236] Trace[1009957967]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 17:44:35.185) (total time: 14971ms): Feb 17 17:44:50 crc kubenswrapper[4680]: Trace[1009957967]: ---"Objects listed" error: 14971ms (17:44:50.156) Feb 17 17:44:50 crc kubenswrapper[4680]: Trace[1009957967]: [14.971250372s] [14.971250372s] END Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.156463 4680 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.157111 4680 trace.go:236] Trace[100695807]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 17:44:38.437) (total time: 11719ms): Feb 17 17:44:50 crc kubenswrapper[4680]: Trace[100695807]: ---"Objects listed" error: 11719ms (17:44:50.156) Feb 17 17:44:50 crc kubenswrapper[4680]: Trace[100695807]: [11.719878209s] [11.719878209s] END Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.157134 4680 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.158413 4680 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.158553 4680 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 
17:44:50.162918 4680 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.163193 4680 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.164518 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.164560 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.164577 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.164606 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.164619 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:50Z","lastTransitionTime":"2026-02-17T17:44:50Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 17 17:44:50 crc kubenswrapper[4680]: E0217 17:44:50.182452 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:50 crc kubenswrapper[4680]: 
I0217 17:44:50.185962 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.186007 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.186018 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.186040 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.186053 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:50Z","lastTransitionTime":"2026-02-17T17:44:50Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.187923 4680 csr.go:261] certificate signing request csr-9g9dr is approved, waiting to be issued Feb 17 17:44:50 crc kubenswrapper[4680]: E0217 17:44:50.194587 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:50 crc kubenswrapper[4680]: 
I0217 17:44:50.199186 4680 csr.go:257] certificate signing request csr-9g9dr is issued Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.200379 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.200439 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.200478 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.200501 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.200513 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:50Z","lastTransitionTime":"2026-02-17T17:44:50Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 17 17:44:50 crc kubenswrapper[4680]: E0217 17:44:50.213831 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:50 crc kubenswrapper[4680]: 
I0217 17:44:50.216961 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.216989 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.216997 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.217014 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.217023 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:50Z","lastTransitionTime":"2026-02-17T17:44:50Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 17 17:44:50 crc kubenswrapper[4680]: E0217 17:44:50.226142 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:50 crc kubenswrapper[4680]: 
I0217 17:44:50.231685 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.231739 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.231748 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.231765 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.231775 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:50Z","lastTransitionTime":"2026-02-17T17:44:50Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 17 17:44:50 crc kubenswrapper[4680]: E0217 17:44:50.241661 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:50 crc kubenswrapper[4680]: 
E0217 17:44:50.241824 4680 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.243844 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.243887 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.243899 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.243925 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.243934 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:50Z","lastTransitionTime":"2026-02-17T17:44:50Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.255737 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-l5ckv"] Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.256086 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-l5ckv" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.258194 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.258585 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.258623 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.258645 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.258667 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.258690 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.258711 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.258732 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.258751 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.258769 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.258791 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.258815 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.258839 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.258863 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.258884 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.258907 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.258929 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.258952 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.258958 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.258978 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.258981 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.258999 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259019 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259041 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259060 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259080 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259069 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259106 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259130 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259152 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259172 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259194 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259214 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259234 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259256 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259279 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259301 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259350 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259372 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259393 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259416 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259438 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259457 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259478 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259499 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259524 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259567 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259588 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259610 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259634 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259656 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259169 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259673 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259291 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259349 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259706 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259468 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259731 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259517 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259533 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259573 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259573 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259600 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259638 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259788 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259834 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259680 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259882 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259883 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259924 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259926 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259951 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.259956 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260007 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260024 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260046 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260063 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260082 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260095 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260122 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260147 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260165 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260184 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260209 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260224 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260239 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260245 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260255 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260274 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260292 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260327 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260354 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260381 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260410 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260420 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260434 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260462 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260486 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260510 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260539 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260563 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260588 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260614 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260639 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260663 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260685 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260702 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260718 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260734 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260751 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260777 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260793 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260809 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260829 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260845 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod 
\"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260860 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260875 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260890 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260907 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260954 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260973 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260990 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261006 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261026 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261050 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261067 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261087 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261104 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261119 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261137 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261153 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261170 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261184 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261199 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261214 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261240 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261255 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261272 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261288 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261303 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261332 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261349 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261366 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261382 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261396 4680 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261414 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261431 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261446 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261464 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261479 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261496 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261512 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261526 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261543 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 17 17:44:50 crc 
kubenswrapper[4680]: I0217 17:44:50.261560 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261574 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261591 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261608 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261624 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261640 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261654 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261670 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261695 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261721 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261750 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261777 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261803 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261819 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261836 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261852 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261867 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261884 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261902 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261918 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod 
\"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261937 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261952 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261969 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261985 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262004 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262021 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262037 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262053 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262071 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262087 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262105 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262123 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262140 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262156 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262172 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262187 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262204 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262225 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262243 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262260 4680 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262278 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262294 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262311 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262634 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262652 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262669 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262685 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262702 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262718 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 17:44:50 crc 
kubenswrapper[4680]: I0217 17:44:50.262736 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262751 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262767 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262784 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262800 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262825 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262848 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262875 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262901 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262925 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262941 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262959 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262975 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262990 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263007 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263023 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263039 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263055 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263070 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263087 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263123 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263145 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263167 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263187 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263207 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263227 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263289 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263309 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263342 4680 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263361 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263381 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263403 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263452 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263471 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263523 4680 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263535 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263546 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263557 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263568 4680 
reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263578 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263587 4680 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263597 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263606 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263616 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263625 4680 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263635 4680 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263644 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263654 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263664 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263674 4680 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263683 4680 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263694 4680 
reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263703 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263713 4680 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263722 4680 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263731 4680 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263741 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263753 4680 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263762 4680 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263771 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263781 4680 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260435 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260447 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260455 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260472 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260511 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260649 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260723 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260735 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260772 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260905 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.260950 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261299 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261524 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261655 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.261895 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262076 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262080 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262183 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.262205 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263340 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263350 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263362 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263381 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263378 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263632 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). 
InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263700 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.263775 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.264067 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.264693 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.264745 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.264949 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: E0217 17:44:50.265016 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:44:50.764999651 +0000 UTC m=+20.315898320 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.267516 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.267562 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.267706 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.267873 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.267887 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.268063 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.268296 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.268770 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.269098 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.269500 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.265061 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.265164 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.265355 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.265385 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.265596 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.265639 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.265673 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.265784 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.266247 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.266455 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.266635 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.266798 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.266867 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.266957 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.267065 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.269763 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.269870 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.270049 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.270180 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.270216 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.270521 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.270894 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.271333 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.271355 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.271662 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.276631 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.276940 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.277121 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.277435 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.277695 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.277991 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.278175 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.278415 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.278612 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.278785 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.279136 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.280879 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.281120 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.281222 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.281279 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.281154 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.281382 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.281442 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.281552 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.281949 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.282115 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.282373 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.282578 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.282781 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.284235 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: E0217 17:44:50.284368 4680 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 17:44:50 crc kubenswrapper[4680]: E0217 17:44:50.286033 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 17:44:50.785992546 +0000 UTC m=+20.336891295 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.286153 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.286670 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.287162 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.287515 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.287813 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.288098 4680 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.288423 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.288990 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.290008 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: E0217 17:44:50.290021 4680 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.290073 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: E0217 17:44:50.290126 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 17:44:50.790104417 +0000 UTC m=+20.341003186 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.290014 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.285586 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.285875 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.290342 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.290439 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.290786 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.290768 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.284884 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.291079 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.291138 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.291340 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.291656 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.292580 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.292781 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.296312 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.290870 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.296774 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.297443 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.298991 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.300032 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.300513 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.301656 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.301858 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.301882 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.302531 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.302594 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.302703 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.303034 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.303115 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.303302 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.303308 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.303519 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.303705 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.303752 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.303855 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.304257 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.285259 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.304399 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.304651 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.304723 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: E0217 17:44:50.307461 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 17:44:50 crc kubenswrapper[4680]: E0217 17:44:50.307494 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 17:44:50 crc kubenswrapper[4680]: E0217 17:44:50.307512 4680 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 17:44:50 crc kubenswrapper[4680]: E0217 17:44:50.307572 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 17:44:50.807554728 +0000 UTC m=+20.358453507 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.309982 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: E0217 17:44:50.310012 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 17:44:50 crc kubenswrapper[4680]: E0217 17:44:50.310185 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 17:44:50 crc kubenswrapper[4680]: E0217 17:44:50.310242 4680 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 17:44:50 crc kubenswrapper[4680]: E0217 17:44:50.310380 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 17:44:50.810369397 +0000 UTC m=+20.361268186 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.313618 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.315745 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.324456 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.324696 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.324854 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.325013 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.325773 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.327545 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.329803 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.331063 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.332348 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.334612 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.336898 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.337448 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.338389 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.338616 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.338728 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.338836 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.338941 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.338980 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.339061 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.339269 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.339359 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.339722 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.339818 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.339918 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.339967 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.340222 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.341140 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.344250 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.345608 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.345632 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.345641 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.345676 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.345686 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:50Z","lastTransitionTime":"2026-02-17T17:44:50Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.349576 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.350367 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.350465 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.350471 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.351486 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.353918 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.355570 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.356918 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364075 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364476 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364517 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364544 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/68c51268-d876-4559-a54d-46984c40743c-hosts-file\") pod \"node-resolver-l5ckv\" (UID: \"68c51268-d876-4559-a54d-46984c40743c\") " pod="openshift-dns/node-resolver-l5ckv" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364562 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqfxd\" (UniqueName: \"kubernetes.io/projected/68c51268-d876-4559-a54d-46984c40743c-kube-api-access-wqfxd\") pod \"node-resolver-l5ckv\" (UID: \"68c51268-d876-4559-a54d-46984c40743c\") " pod="openshift-dns/node-resolver-l5ckv" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364614 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364625 4680 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364634 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364642 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" 
DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364651 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364660 4680 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364670 4680 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364678 4680 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364686 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364694 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364702 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364710 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364718 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364726 4680 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364735 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364744 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364756 4680 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc 
kubenswrapper[4680]: I0217 17:44:50.364767 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364795 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364805 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364813 4680 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364821 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364829 4680 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364836 4680 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364844 4680 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364852 4680 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364860 4680 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364869 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364877 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364886 4680 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364895 4680 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364902 4680 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364910 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364918 4680 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364926 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364950 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364959 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364967 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364976 4680 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364983 4680 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364991 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.364998 4680 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365008 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365016 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: 
\"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365024 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365032 4680 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365040 4680 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365048 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365056 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365064 4680 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365072 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365079 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365087 4680 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365095 4680 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365103 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365117 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365126 4680 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" 
(UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365133 4680 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365140 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365148 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365156 4680 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365164 4680 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365173 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365181 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365189 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365196 4680 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365204 4680 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365214 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365222 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365230 4680 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365238 4680 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365246 4680 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365254 4680 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365262 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365270 4680 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365279 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365286 4680 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365294 4680 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365303 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365311 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365339 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365348 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365356 4680 
reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365363 4680 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365371 4680 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365378 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365386 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365394 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365402 4680 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365409 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365418 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365426 4680 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365434 4680 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365442 4680 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365449 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365457 4680 
reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365465 4680 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365472 4680 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365481 4680 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365490 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365497 4680 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365505 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365514 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365523 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365530 4680 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365538 4680 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365547 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365555 4680 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc 
kubenswrapper[4680]: I0217 17:44:50.365563 4680 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365571 4680 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365579 4680 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365620 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365629 4680 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365637 4680 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365645 4680 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365654 4680 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365662 4680 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365671 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365679 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365688 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365695 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 17 
17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365703 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365711 4680 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365720 4680 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365729 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365737 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365745 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365752 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365760 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365768 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365777 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365785 4680 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365793 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365800 4680 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 17 
17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365808 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365820 4680 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365828 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365836 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365844 4680 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365852 4680 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365860 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365869 4680 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365877 4680 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365885 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365894 4680 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365902 4680 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365910 4680 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365918 4680 
reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365926 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365934 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365942 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365950 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365958 4680 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365966 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365975 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365983 4680 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365990 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.365999 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.366007 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.366014 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.366023 4680 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.366031 4680 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.366039 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.366047 4680 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.366100 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.366211 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.366824 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.375691 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.378844 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.391996 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.448532 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.448580 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.448591 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.448612 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.448629 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:50Z","lastTransitionTime":"2026-02-17T17:44:50Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.466884 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/68c51268-d876-4559-a54d-46984c40743c-hosts-file\") pod \"node-resolver-l5ckv\" (UID: \"68c51268-d876-4559-a54d-46984c40743c\") " pod="openshift-dns/node-resolver-l5ckv" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.466923 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqfxd\" (UniqueName: \"kubernetes.io/projected/68c51268-d876-4559-a54d-46984c40743c-kube-api-access-wqfxd\") pod \"node-resolver-l5ckv\" (UID: \"68c51268-d876-4559-a54d-46984c40743c\") " pod="openshift-dns/node-resolver-l5ckv" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.466960 4680 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.466972 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.466982 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.467163 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/68c51268-d876-4559-a54d-46984c40743c-hosts-file\") pod \"node-resolver-l5ckv\" (UID: \"68c51268-d876-4559-a54d-46984c40743c\") " pod="openshift-dns/node-resolver-l5ckv" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.487091 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqfxd\" (UniqueName: \"kubernetes.io/projected/68c51268-d876-4559-a54d-46984c40743c-kube-api-access-wqfxd\") pod \"node-resolver-l5ckv\" (UID: \"68c51268-d876-4559-a54d-46984c40743c\") " pod="openshift-dns/node-resolver-l5ckv" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.550420 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.550469 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.550496 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.550513 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.550521 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:50Z","lastTransitionTime":"2026-02-17T17:44:50Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.550881 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.558074 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 17:44:50 crc kubenswrapper[4680]: W0217 17:44:50.564514 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-927f3dd3e267f9baf441182009f681be1bb02019ed2a3e0daffdb669617d4955 WatchSource:0}: Error finding container 927f3dd3e267f9baf441182009f681be1bb02019ed2a3e0daffdb669617d4955: Status 404 returned error can't find the container with id 927f3dd3e267f9baf441182009f681be1bb02019ed2a3e0daffdb669617d4955 Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.565900 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 17:44:50 crc kubenswrapper[4680]: W0217 17:44:50.568017 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-2a0521f0ced214b4708a685809a9a296f3683b2e9cec0ed7b4e42b63f35baa8a WatchSource:0}: Error finding container 2a0521f0ced214b4708a685809a9a296f3683b2e9cec0ed7b4e42b63f35baa8a: Status 404 returned error can't find the container with id 2a0521f0ced214b4708a685809a9a296f3683b2e9cec0ed7b4e42b63f35baa8a Feb 17 17:44:50 crc kubenswrapper[4680]: W0217 17:44:50.576006 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-0e21ab78a100fa7f3f532c5548be425f99e4f103c6696d19ef9146b16cdc2b9e WatchSource:0}: Error finding container 0e21ab78a100fa7f3f532c5548be425f99e4f103c6696d19ef9146b16cdc2b9e: Status 404 returned error can't find the container with id 0e21ab78a100fa7f3f532c5548be425f99e4f103c6696d19ef9146b16cdc2b9e Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.650639 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-l5ckv" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.654418 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.654443 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.654451 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.654467 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.654476 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:50Z","lastTransitionTime":"2026-02-17T17:44:50Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.738960 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.741612 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.746034 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.748593 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.758673 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.761975 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.762002 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.762011 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.762028 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.762037 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:50Z","lastTransitionTime":"2026-02-17T17:44:50Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.768775 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:44:50 crc kubenswrapper[4680]: E0217 17:44:50.768893 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:44:51.768879814 +0000 UTC m=+21.319778493 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.772569 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.780806 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.791838 4680 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.801681 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.810909 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.825671 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.841207 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 
17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.853860 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.864333 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.864363 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.864372 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.864387 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.864395 4680 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:50Z","lastTransitionTime":"2026-02-17T17:44:50Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.864996 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.869813 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.869843 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.869864 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.869883 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:44:50 crc kubenswrapper[4680]: E0217 17:44:50.869933 4680 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 17:44:50 crc kubenswrapper[4680]: E0217 17:44:50.869967 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 17:44:51.869955973 +0000 UTC m=+21.420854652 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 17:44:50 crc kubenswrapper[4680]: E0217 17:44:50.870034 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 17:44:50 crc kubenswrapper[4680]: E0217 17:44:50.870045 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 17:44:50 crc kubenswrapper[4680]: E0217 17:44:50.870054 4680 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 17:44:50 crc kubenswrapper[4680]: E0217 17:44:50.870074 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 17:44:51.870067766 +0000 UTC m=+21.420966445 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 17:44:50 crc kubenswrapper[4680]: E0217 17:44:50.870113 4680 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 17:44:50 crc kubenswrapper[4680]: E0217 17:44:50.870131 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 17:44:51.870126358 +0000 UTC m=+21.421025037 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 17:44:50 crc kubenswrapper[4680]: E0217 17:44:50.870166 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 17:44:50 crc kubenswrapper[4680]: E0217 17:44:50.870174 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 17:44:50 crc kubenswrapper[4680]: E0217 17:44:50.870182 4680 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 17:44:50 crc kubenswrapper[4680]: E0217 17:44:50.870199 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 17:44:51.87019377 +0000 UTC m=+21.421092449 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.871862 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.883649 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.897219 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.903936 4680 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 17 17:44:50 crc kubenswrapper[4680]: W0217 17:44:50.904055 4680 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Feb 17 17:44:50 crc kubenswrapper[4680]: W0217 17:44:50.904566 4680 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.905125 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c/status\": read tcp 38.102.83.180:40364->38.102.83.180:6443: use of closed network connection" Feb 17 17:44:50 crc kubenswrapper[4680]: W0217 17:44:50.905278 4680 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 17 17:44:50 crc kubenswrapper[4680]: W0217 17:44:50.905441 4680 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Feb 17 17:44:50 crc kubenswrapper[4680]: W0217 17:44:50.905467 4680 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 17 17:44:50 crc kubenswrapper[4680]: W0217 17:44:50.905872 4680 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.919618 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.966903 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.966937 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.966945 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.966982 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:50 crc kubenswrapper[4680]: I0217 17:44:50.966992 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:50Z","lastTransitionTime":"2026-02-17T17:44:50Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.051845 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 22:38:12.115325589 +0000 UTC Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.069409 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.069447 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.069457 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.069472 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.069484 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:51Z","lastTransitionTime":"2026-02-17T17:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.104872 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.104937 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.104983 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:44:51 crc kubenswrapper[4680]: E0217 17:44:51.105067 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:44:51 crc kubenswrapper[4680]: E0217 17:44:51.105153 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:44:51 crc kubenswrapper[4680]: E0217 17:44:51.105281 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.108866 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.109711 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.110402 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.110972 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.111519 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.112000 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.112618 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.113162 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.113783 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.114286 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.114865 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.115599 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.116066 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.116575 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.117103 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.118295 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.120128 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.120800 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.121183 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.122125 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.122673 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.123617 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.124153 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.124616 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.126072 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.126506 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.126828 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.127734 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.128490 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.129591 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.130266 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.131556 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" 
path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.132111 4680 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.132227 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.134633 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.135288 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.135959 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.138233 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.138253 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.140995 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.141768 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.143073 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.143896 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.145198 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.146601 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.147950 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.148563 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.149620 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.150197 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.151216 4680 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.151991 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.152825 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.153345 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.154028 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.154163 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.154691 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.155255 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.156257 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.173552 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.173593 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.173604 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.173620 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.173632 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:51Z","lastTransitionTime":"2026-02-17T17:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.176812 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.187730 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 
17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.197962 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.199985 4680 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-17 17:39:50 +0000 UTC, rotation deadline is 2027-01-11 03:59:54.025627178 +0000 UTC Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.200105 4680 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7858h15m2.825524749s for next certificate rotation Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.207555 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.215340 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"0e21ab78a100fa7f3f532c5548be425f99e4f103c6696d19ef9146b16cdc2b9e"} Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.216679 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4"} Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.216719 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"927f3dd3e267f9baf441182009f681be1bb02019ed2a3e0daffdb669617d4955"} Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.218364 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70"} Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.218386 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042"} Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.218397 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2a0521f0ced214b4708a685809a9a296f3683b2e9cec0ed7b4e42b63f35baa8a"} Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.220016 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-l5ckv" event={"ID":"68c51268-d876-4559-a54d-46984c40743c","Type":"ContainerStarted","Data":"03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69"} Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.220038 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-l5ckv" 
event={"ID":"68c51268-d876-4559-a54d-46984c40743c","Type":"ContainerStarted","Data":"3d64bb036caa140d5eb08e3c0f906b0f540eef1ca9a101b884bb1035cf0a2ab0"} Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.230124 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.237709 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:51 crc kubenswrapper[4680]: E0217 17:44:51.244600 4680 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.244957 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.262985 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 
17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.275351 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.275391 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.275401 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.275415 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.275425 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:51Z","lastTransitionTime":"2026-02-17T17:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.299900 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.345993 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.370853 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.377308 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.377366 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.377377 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.377396 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.377408 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:51Z","lastTransitionTime":"2026-02-17T17:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.397391 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.412889 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.427184 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 
17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.439648 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.453200 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.475210 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.479184 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.479233 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.479245 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.479262 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.479275 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:51Z","lastTransitionTime":"2026-02-17T17:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.487565 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.502685 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.513765 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.581226 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.581267 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.581277 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.581293 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.581304 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:51Z","lastTransitionTime":"2026-02-17T17:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.604677 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-t2c5v"] Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.604983 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-rfw56"] Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.605108 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.605277 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.608009 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.608054 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.608666 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.608992 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.609083 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.609113 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.609225 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.609261 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.609292 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-sdjhh"] Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.609971 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.611519 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.611604 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.611899 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.612005 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.621879 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.635697 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.644122 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T17:44:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.667486 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.677666 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-os-release\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.677709 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-multus-cni-dir\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.677731 4680 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/63b718d8-11fb-4613-af5a-d48392570e7b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-sdjhh\" (UID: \"63b718d8-11fb-4613-af5a-d48392570e7b\") " pod="openshift-multus/multus-additional-cni-plugins-sdjhh" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.677750 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8dd08058-016e-4607-8be6-40463674c2fe-mcd-auth-proxy-config\") pod \"machine-config-daemon-rfw56\" (UID: \"8dd08058-016e-4607-8be6-40463674c2fe\") " pod="openshift-machine-config-operator/machine-config-daemon-rfw56" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.677770 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-cnibin\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.677789 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c056df58-0160-4cf6-ae0c-dc7095c8be95-multus-daemon-config\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.677807 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-system-cni-dir\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.677829 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv9pt\" (UniqueName: \"kubernetes.io/projected/c056df58-0160-4cf6-ae0c-dc7095c8be95-kube-api-access-xv9pt\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.677859 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/63b718d8-11fb-4613-af5a-d48392570e7b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-sdjhh\" (UID: \"63b718d8-11fb-4613-af5a-d48392570e7b\") " pod="openshift-multus/multus-additional-cni-plugins-sdjhh" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.677916 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-host-run-k8s-cni-cncf-io\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.677946 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-host-run-netns\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 
17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.677997 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-multus-conf-dir\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.678026 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-host-var-lib-kubelet\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.678059 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjp7b\" (UniqueName: \"kubernetes.io/projected/63b718d8-11fb-4613-af5a-d48392570e7b-kube-api-access-sjp7b\") pod \"multus-additional-cni-plugins-sdjhh\" (UID: \"63b718d8-11fb-4613-af5a-d48392570e7b\") " pod="openshift-multus/multus-additional-cni-plugins-sdjhh" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.678076 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8dd08058-016e-4607-8be6-40463674c2fe-proxy-tls\") pod \"machine-config-daemon-rfw56\" (UID: \"8dd08058-016e-4607-8be6-40463674c2fe\") " pod="openshift-machine-config-operator/machine-config-daemon-rfw56" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.678091 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-host-var-lib-cni-bin\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.678105 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c056df58-0160-4cf6-ae0c-dc7095c8be95-cni-binary-copy\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.678133 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-etc-kubernetes\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.678161 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/63b718d8-11fb-4613-af5a-d48392570e7b-system-cni-dir\") pod \"multus-additional-cni-plugins-sdjhh\" (UID: \"63b718d8-11fb-4613-af5a-d48392570e7b\") " pod="openshift-multus/multus-additional-cni-plugins-sdjhh" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.678187 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-multus-socket-dir-parent\") pod \"multus-t2c5v\" (UID: 
\"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.678220 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-host-var-lib-cni-multus\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.678240 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/63b718d8-11fb-4613-af5a-d48392570e7b-cnibin\") pod \"multus-additional-cni-plugins-sdjhh\" (UID: \"63b718d8-11fb-4613-af5a-d48392570e7b\") " pod="openshift-multus/multus-additional-cni-plugins-sdjhh" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.678260 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8dd08058-016e-4607-8be6-40463674c2fe-rootfs\") pod \"machine-config-daemon-rfw56\" (UID: \"8dd08058-016e-4607-8be6-40463674c2fe\") " pod="openshift-machine-config-operator/machine-config-daemon-rfw56" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.678279 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-hostroot\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.678303 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89b9b\" (UniqueName: \"kubernetes.io/projected/8dd08058-016e-4607-8be6-40463674c2fe-kube-api-access-89b9b\") pod \"machine-config-daemon-rfw56\" (UID: \"8dd08058-016e-4607-8be6-40463674c2fe\") " pod="openshift-machine-config-operator/machine-config-daemon-rfw56" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.678355 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-host-run-multus-certs\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.678392 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/63b718d8-11fb-4613-af5a-d48392570e7b-os-release\") pod \"multus-additional-cni-plugins-sdjhh\" (UID: \"63b718d8-11fb-4613-af5a-d48392570e7b\") " pod="openshift-multus/multus-additional-cni-plugins-sdjhh" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.678429 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/63b718d8-11fb-4613-af5a-d48392570e7b-cni-binary-copy\") pod \"multus-additional-cni-plugins-sdjhh\" (UID: \"63b718d8-11fb-4613-af5a-d48392570e7b\") " pod="openshift-multus/multus-additional-cni-plugins-sdjhh" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.683060 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 
17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.683100 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.683116 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.683132 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.683145 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:51Z","lastTransitionTime":"2026-02-17T17:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.693949 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.712628 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.729539 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.742562 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.755669 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.768978 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780059 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780141 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c056df58-0160-4cf6-ae0c-dc7095c8be95-cni-binary-copy\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780165 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-etc-kubernetes\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780186 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/63b718d8-11fb-4613-af5a-d48392570e7b-system-cni-dir\") pod \"multus-additional-cni-plugins-sdjhh\" (UID: \"63b718d8-11fb-4613-af5a-d48392570e7b\") " pod="openshift-multus/multus-additional-cni-plugins-sdjhh" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780206 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-multus-socket-dir-parent\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780234 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-host-var-lib-cni-multus\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780254 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/63b718d8-11fb-4613-af5a-d48392570e7b-cnibin\") pod \"multus-additional-cni-plugins-sdjhh\" (UID: \"63b718d8-11fb-4613-af5a-d48392570e7b\") " pod="openshift-multus/multus-additional-cni-plugins-sdjhh" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780257 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/63b718d8-11fb-4613-af5a-d48392570e7b-system-cni-dir\") pod \"multus-additional-cni-plugins-sdjhh\" (UID: \"63b718d8-11fb-4613-af5a-d48392570e7b\") " pod="openshift-multus/multus-additional-cni-plugins-sdjhh" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780273 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8dd08058-016e-4607-8be6-40463674c2fe-rootfs\") pod \"machine-config-daemon-rfw56\" (UID: \"8dd08058-016e-4607-8be6-40463674c2fe\") " pod="openshift-machine-config-operator/machine-config-daemon-rfw56" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780306 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8dd08058-016e-4607-8be6-40463674c2fe-rootfs\") pod \"machine-config-daemon-rfw56\" (UID: \"8dd08058-016e-4607-8be6-40463674c2fe\") " 
pod="openshift-machine-config-operator/machine-config-daemon-rfw56" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780303 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-host-var-lib-cni-multus\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780310 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/63b718d8-11fb-4613-af5a-d48392570e7b-cnibin\") pod \"multus-additional-cni-plugins-sdjhh\" (UID: \"63b718d8-11fb-4613-af5a-d48392570e7b\") " pod="openshift-multus/multus-additional-cni-plugins-sdjhh" Feb 17 17:44:51 crc kubenswrapper[4680]: E0217 17:44:51.780345 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:44:53.780285986 +0000 UTC m=+23.331184665 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780383 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-multus-socket-dir-parent\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780402 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-hostroot\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780477 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-host-run-multus-certs\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780249 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-etc-kubernetes\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780498 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/63b718d8-11fb-4613-af5a-d48392570e7b-os-release\") pod \"multus-additional-cni-plugins-sdjhh\" (UID: \"63b718d8-11fb-4613-af5a-d48392570e7b\") " pod="openshift-multus/multus-additional-cni-plugins-sdjhh" Feb 17 17:44:51 crc 
kubenswrapper[4680]: I0217 17:44:51.780441 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-hostroot\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780518 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/63b718d8-11fb-4613-af5a-d48392570e7b-cni-binary-copy\") pod \"multus-additional-cni-plugins-sdjhh\" (UID: \"63b718d8-11fb-4613-af5a-d48392570e7b\") " pod="openshift-multus/multus-additional-cni-plugins-sdjhh" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780563 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89b9b\" (UniqueName: \"kubernetes.io/projected/8dd08058-016e-4607-8be6-40463674c2fe-kube-api-access-89b9b\") pod \"machine-config-daemon-rfw56\" (UID: \"8dd08058-016e-4607-8be6-40463674c2fe\") " pod="openshift-machine-config-operator/machine-config-daemon-rfw56" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780532 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-host-run-multus-certs\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780605 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-os-release\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780606 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/63b718d8-11fb-4613-af5a-d48392570e7b-os-release\") pod \"multus-additional-cni-plugins-sdjhh\" (UID: \"63b718d8-11fb-4613-af5a-d48392570e7b\") " pod="openshift-multus/multus-additional-cni-plugins-sdjhh" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780624 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-multus-cni-dir\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780664 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/63b718d8-11fb-4613-af5a-d48392570e7b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-sdjhh\" (UID: \"63b718d8-11fb-4613-af5a-d48392570e7b\") " pod="openshift-multus/multus-additional-cni-plugins-sdjhh" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780675 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-os-release\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780696 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8dd08058-016e-4607-8be6-40463674c2fe-mcd-auth-proxy-config\") pod \"machine-config-daemon-rfw56\" (UID: \"8dd08058-016e-4607-8be6-40463674c2fe\") " pod="openshift-machine-config-operator/machine-config-daemon-rfw56" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780729 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-cnibin\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780749 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c056df58-0160-4cf6-ae0c-dc7095c8be95-multus-daemon-config\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780773 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-multus-cni-dir\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780795 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-cnibin\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780813 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-system-cni-dir\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780774 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-system-cni-dir\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780847 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv9pt\" (UniqueName: \"kubernetes.io/projected/c056df58-0160-4cf6-ae0c-dc7095c8be95-kube-api-access-xv9pt\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780867 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/63b718d8-11fb-4613-af5a-d48392570e7b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-sdjhh\" (UID: \"63b718d8-11fb-4613-af5a-d48392570e7b\") " pod="openshift-multus/multus-additional-cni-plugins-sdjhh" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780888 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-host-run-k8s-cni-cncf-io\") pod \"multus-t2c5v\" (UID: 
\"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780906 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-host-run-netns\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780927 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-multus-conf-dir\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780953 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-host-var-lib-kubelet\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780976 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjp7b\" (UniqueName: \"kubernetes.io/projected/63b718d8-11fb-4613-af5a-d48392570e7b-kube-api-access-sjp7b\") pod \"multus-additional-cni-plugins-sdjhh\" (UID: \"63b718d8-11fb-4613-af5a-d48392570e7b\") " pod="openshift-multus/multus-additional-cni-plugins-sdjhh" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.780994 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-host-run-netns\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.781033 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-multus-conf-dir\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.781043 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-host-var-lib-cni-bin\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.781049 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-host-run-k8s-cni-cncf-io\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.781001 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-host-var-lib-cni-bin\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.781066 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c056df58-0160-4cf6-ae0c-dc7095c8be95-host-var-lib-kubelet\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.781089 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8dd08058-016e-4607-8be6-40463674c2fe-proxy-tls\") pod \"machine-config-daemon-rfw56\" (UID: \"8dd08058-016e-4607-8be6-40463674c2fe\") " pod="openshift-machine-config-operator/machine-config-daemon-rfw56" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.781275 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/63b718d8-11fb-4613-af5a-d48392570e7b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-sdjhh\" (UID: \"63b718d8-11fb-4613-af5a-d48392570e7b\") " pod="openshift-multus/multus-additional-cni-plugins-sdjhh" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.781281 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c056df58-0160-4cf6-ae0c-dc7095c8be95-cni-binary-copy\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.781611 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/63b718d8-11fb-4613-af5a-d48392570e7b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-sdjhh\" (UID: \"63b718d8-11fb-4613-af5a-d48392570e7b\") " pod="openshift-multus/multus-additional-cni-plugins-sdjhh" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.781646 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/63b718d8-11fb-4613-af5a-d48392570e7b-cni-binary-copy\") pod \"multus-additional-cni-plugins-sdjhh\" (UID: \"63b718d8-11fb-4613-af5a-d48392570e7b\") " pod="openshift-multus/multus-additional-cni-plugins-sdjhh" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.781875 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8dd08058-016e-4607-8be6-40463674c2fe-mcd-auth-proxy-config\") pod \"machine-config-daemon-rfw56\" (UID: \"8dd08058-016e-4607-8be6-40463674c2fe\") " pod="openshift-machine-config-operator/machine-config-daemon-rfw56" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.782299 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c056df58-0160-4cf6-ae0c-dc7095c8be95-multus-daemon-config\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.785524 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.785553 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.785564 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.785580 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.785610 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:51Z","lastTransitionTime":"2026-02-17T17:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.785661 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8dd08058-016e-4607-8be6-40463674c2fe-proxy-tls\") pod \"machine-config-daemon-rfw56\" (UID: \"8dd08058-016e-4607-8be6-40463674c2fe\") " pod="openshift-machine-config-operator/machine-config-daemon-rfw56" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.794004 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.796887 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.803541 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjp7b\" (UniqueName: \"kubernetes.io/projected/63b718d8-11fb-4613-af5a-d48392570e7b-kube-api-access-sjp7b\") pod \"multus-additional-cni-plugins-sdjhh\" (UID: \"63b718d8-11fb-4613-af5a-d48392570e7b\") " pod="openshift-multus/multus-additional-cni-plugins-sdjhh" Feb 17 17:44:51 crc 
kubenswrapper[4680]: I0217 17:44:51.808486 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89b9b\" (UniqueName: \"kubernetes.io/projected/8dd08058-016e-4607-8be6-40463674c2fe-kube-api-access-89b9b\") pod \"machine-config-daemon-rfw56\" (UID: \"8dd08058-016e-4607-8be6-40463674c2fe\") " pod="openshift-machine-config-operator/machine-config-daemon-rfw56" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.816328 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv9pt\" (UniqueName: \"kubernetes.io/projected/c056df58-0160-4cf6-ae0c-dc7095c8be95-kube-api-access-xv9pt\") pod \"multus-t2c5v\" (UID: \"c056df58-0160-4cf6-ae0c-dc7095c8be95\") " pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.818791 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.829658 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T17:44:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.860467 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.875992 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.881710 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.881752 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.881772 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.881790 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:44:51 crc kubenswrapper[4680]: E0217 17:44:51.881881 4680 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 17:44:51 crc kubenswrapper[4680]: E0217 17:44:51.881888 4680 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 17:44:51 crc kubenswrapper[4680]: E0217 17:44:51.881924 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 17:44:53.881911222 +0000 UTC m=+23.432809891 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 17:44:51 crc kubenswrapper[4680]: E0217 17:44:51.881979 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 17:44:53.881955804 +0000 UTC m=+23.432854573 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 17:44:51 crc kubenswrapper[4680]: E0217 17:44:51.882076 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 17:44:51 crc kubenswrapper[4680]: E0217 17:44:51.882093 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 17:44:51 crc kubenswrapper[4680]: E0217 17:44:51.882106 4680 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 17:44:51 crc kubenswrapper[4680]: E0217 17:44:51.882135 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 17:44:53.882125999 +0000 UTC m=+23.433024688 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 17:44:51 crc kubenswrapper[4680]: E0217 17:44:51.882190 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 17:44:51 crc kubenswrapper[4680]: E0217 17:44:51.882201 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 17:44:51 crc kubenswrapper[4680]: E0217 17:44:51.882212 4680 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 17:44:51 crc kubenswrapper[4680]: E0217 17:44:51.882236 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 17:44:53.882228242 +0000 UTC m=+23.433126921 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.888065 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.888095 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.888103 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.888115 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.888125 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:51Z","lastTransitionTime":"2026-02-17T17:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.892590 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.914572 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 
17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.915702 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-t2c5v" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.922791 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.930815 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.952956 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.983270 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.993581 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.993618 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.993628 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.993644 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.993656 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:51Z","lastTransitionTime":"2026-02-17T17:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.997988 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8vx4n"] Feb 17 17:44:51 crc kubenswrapper[4680]: I0217 17:44:51.999542 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.004871 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.005042 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.005146 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.005344 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.005484 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.005576 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.005760 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.005864 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.024830 4680 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:52Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.040362 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.041722 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:52Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.051979 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 10:58:44.115028453 +0000 UTC Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.052054 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:52Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.067731 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:52Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.083237 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-run-ovn-kubernetes\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.083267 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-node-log\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.083285 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-etc-openvswitch\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.083301 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2733b1c1-6f98-4184-b59b-123435eeb569-ovnkube-script-lib\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.083332 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-cni-bin\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.083360 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-run-netns\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.083375 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2733b1c1-6f98-4184-b59b-123435eeb569-env-overrides\") pod 
\"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.083390 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-run-ovn\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.083404 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-log-socket\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.083419 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-run-systemd\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.083435 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-systemd-units\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.083451 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-var-lib-openvswitch\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.083470 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-kubelet\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.083483 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-slash\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.083498 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.083513 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-plhbl\" (UniqueName: \"kubernetes.io/projected/2733b1c1-6f98-4184-b59b-123435eeb569-kube-api-access-plhbl\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.083527 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-run-openvswitch\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.083540 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2733b1c1-6f98-4184-b59b-123435eeb569-ovnkube-config\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.083556 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2733b1c1-6f98-4184-b59b-123435eeb569-ovn-node-metrics-cert\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.083583 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-cni-netd\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.095700 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:52Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.098569 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.098606 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 
17:44:52.098615 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.098632 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.098643 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:52Z","lastTransitionTime":"2026-02-17T17:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.108091 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:52Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.119379 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:52Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.134194 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:52Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.135363 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.152231 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:52Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184160 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-run-ovn\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184204 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-log-socket\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184227 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-run-systemd\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184245 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-systemd-units\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184266 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-var-lib-openvswitch\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184283 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-kubelet\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184299 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-slash\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184340 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184364 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plhbl\" (UniqueName: \"kubernetes.io/projected/2733b1c1-6f98-4184-b59b-123435eeb569-kube-api-access-plhbl\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184394 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-run-openvswitch\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184413 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2733b1c1-6f98-4184-b59b-123435eeb569-ovnkube-config\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184482 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2733b1c1-6f98-4184-b59b-123435eeb569-ovn-node-metrics-cert\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184511 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-cni-netd\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184538 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-run-ovn-kubernetes\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184559 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-node-log\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184583 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-etc-openvswitch\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184593 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-run-ovn\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184622 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-run-openvswitch\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184605 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-cni-netd\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184605 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2733b1c1-6f98-4184-b59b-123435eeb569-ovnkube-script-lib\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184553 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-log-socket\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184691 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-cni-bin\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184576 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-run-systemd\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184744 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-run-netns\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184752 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-run-ovn-kubernetes\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184784 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-cni-bin\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184746 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-etc-openvswitch\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184792 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184722 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-run-netns\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184825 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-kubelet\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.185372 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2733b1c1-6f98-4184-b59b-123435eeb569-ovnkube-config\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.185415 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2733b1c1-6f98-4184-b59b-123435eeb569-ovnkube-script-lib\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.185999 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2733b1c1-6f98-4184-b59b-123435eeb569-env-overrides\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.186490 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2733b1c1-6f98-4184-b59b-123435eeb569-env-overrides\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184676 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-systemd-units\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.186540 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-var-lib-openvswitch\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184794 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-slash\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.190308 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2733b1c1-6f98-4184-b59b-123435eeb569-ovn-node-metrics-cert\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.184721 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-node-log\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.205571 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.205617 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.205627 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.205642 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.205652 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:52Z","lastTransitionTime":"2026-02-17T17:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.226880 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plhbl\" (UniqueName: \"kubernetes.io/projected/2733b1c1-6f98-4184-b59b-123435eeb569-kube-api-access-plhbl\") pod \"ovnkube-node-8vx4n\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.227132 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\
"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:52Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.230981 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerStarted","Data":"8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40"} Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.231048 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerStarted","Data":"aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6"} Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.231064 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerStarted","Data":"aa7e6fa2eb008bee0215b4b70676320f0aca9d3bbbf864fc12024a38834a9ea4"} Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.234181 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t2c5v" event={"ID":"c056df58-0160-4cf6-ae0c-dc7095c8be95","Type":"ContainerStarted","Data":"7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c"} Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.234225 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t2c5v" event={"ID":"c056df58-0160-4cf6-ae0c-dc7095c8be95","Type":"ContainerStarted","Data":"51c045724e6d788135d685c8a64c40adf3f909654ab1808f2ba9c64a575d5186"} Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.249534 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" event={"ID":"63b718d8-11fb-4613-af5a-d48392570e7b","Type":"ContainerStarted","Data":"348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383"} Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.249597 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" event={"ID":"63b718d8-11fb-4613-af5a-d48392570e7b","Type":"ContainerStarted","Data":"24a9041428d871d4dcc20a4c60e4fc37d2636d32dce0d96da8ef27c42611327d"} Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.262511 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:52Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.275749 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:52Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.287823 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:52Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.297957 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-17T17:44:52Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.307596 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.307627 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.307637 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.307653 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.307663 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:52Z","lastTransitionTime":"2026-02-17T17:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.310265 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:52Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.316639 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:52 crc kubenswrapper[4680]: W0217 17:44:52.332363 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2733b1c1_6f98_4184_b59b_123435eeb569.slice/crio-1c1a2c5afbc9ce2fa8901afc214aeec3824ac8359905609213a86b17ab50f391 WatchSource:0}: Error finding container 1c1a2c5afbc9ce2fa8901afc214aeec3824ac8359905609213a86b17ab50f391: Status 404 returned error can't find the container with id 1c1a2c5afbc9ce2fa8901afc214aeec3824ac8359905609213a86b17ab50f391 Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.352915 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T17:44:52Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.404017 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41a
c2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-so
cket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\"
:\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:52Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.409859 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.409900 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.409913 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.409929 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.409940 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:52Z","lastTransitionTime":"2026-02-17T17:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.433199 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:52Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.471917 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:52Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.510240 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:52Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.511723 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.511758 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.511767 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.511782 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.511790 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:52Z","lastTransitionTime":"2026-02-17T17:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.552071 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:52Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.591393 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:52Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.613915 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.613946 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.613955 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.613968 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.613979 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:52Z","lastTransitionTime":"2026-02-17T17:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.632096 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 
17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:52Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.672560 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:52Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.716246 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.716274 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.716284 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.716299 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.716308 4680 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:52Z","lastTransitionTime":"2026-02-17T17:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.818179 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.818213 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.818221 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.818233 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.818242 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:52Z","lastTransitionTime":"2026-02-17T17:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.920598 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.920635 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.920646 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.920660 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:52 crc kubenswrapper[4680]: I0217 17:44:52.920670 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:52Z","lastTransitionTime":"2026-02-17T17:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.023100 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.023145 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.023159 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.023180 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.023195 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:53Z","lastTransitionTime":"2026-02-17T17:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.052800 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 07:06:42.102054312 +0000 UTC Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.105113 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.105113 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:44:53 crc kubenswrapper[4680]: E0217 17:44:53.105230 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.105113 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:44:53 crc kubenswrapper[4680]: E0217 17:44:53.105397 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:44:53 crc kubenswrapper[4680]: E0217 17:44:53.105309 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.111728 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.124257 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.124570 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.125594 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.125642 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.125653 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.125666 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.125679 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:53Z","lastTransitionTime":"2026-02-17T17:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.131145 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:53Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.148682 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-xpqrx"] Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.149245 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-xpqrx" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.153561 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.153623 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.153732 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.153784 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:53Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.154347 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.172545 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:53Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.187955 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:53Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.195633 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f833b509-7a9e-4d56-be02-43df3d11b5ad-host\") pod \"node-ca-xpqrx\" (UID: \"f833b509-7a9e-4d56-be02-43df3d11b5ad\") " pod="openshift-image-registry/node-ca-xpqrx" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.195724 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f833b509-7a9e-4d56-be02-43df3d11b5ad-serviceca\") pod \"node-ca-xpqrx\" (UID: \"f833b509-7a9e-4d56-be02-43df3d11b5ad\") " pod="openshift-image-registry/node-ca-xpqrx" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.195764 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxjq4\" (UniqueName: \"kubernetes.io/projected/f833b509-7a9e-4d56-be02-43df3d11b5ad-kube-api-access-jxjq4\") pod \"node-ca-xpqrx\" (UID: \"f833b509-7a9e-4d56-be02-43df3d11b5ad\") " pod="openshift-image-registry/node-ca-xpqrx" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.200344 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:53Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.213410 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T17:44:53Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.224178 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:53Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.227523 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.227620 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.227632 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.227647 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.227658 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:53Z","lastTransitionTime":"2026-02-17T17:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.237393 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:53Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.253129 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333"} Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.254952 4680 generic.go:334] "Generic (PLEG): container finished" podID="63b718d8-11fb-4613-af5a-d48392570e7b" containerID="348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383" exitCode=0 Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.255022 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" event={"ID":"63b718d8-11fb-4613-af5a-d48392570e7b","Type":"ContainerDied","Data":"348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383"} Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.255879 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:53Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.258213 4680 generic.go:334] "Generic (PLEG): container finished" podID="2733b1c1-6f98-4184-b59b-123435eeb569" containerID="731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024" exitCode=0 Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.258449 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" event={"ID":"2733b1c1-6f98-4184-b59b-123435eeb569","Type":"ContainerDied","Data":"731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024"} Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.258481 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" event={"ID":"2733b1c1-6f98-4184-b59b-123435eeb569","Type":"ContainerStarted","Data":"1c1a2c5afbc9ce2fa8901afc214aeec3824ac8359905609213a86b17ab50f391"} Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.284149 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:53Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.296038 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:53Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.296231 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxjq4\" (UniqueName: \"kubernetes.io/projected/f833b509-7a9e-4d56-be02-43df3d11b5ad-kube-api-access-jxjq4\") pod \"node-ca-xpqrx\" (UID: \"f833b509-7a9e-4d56-be02-43df3d11b5ad\") " pod="openshift-image-registry/node-ca-xpqrx" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.296312 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f833b509-7a9e-4d56-be02-43df3d11b5ad-host\") pod \"node-ca-xpqrx\" (UID: \"f833b509-7a9e-4d56-be02-43df3d11b5ad\") " pod="openshift-image-registry/node-ca-xpqrx" Feb 17 
17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.296483 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f833b509-7a9e-4d56-be02-43df3d11b5ad-serviceca\") pod \"node-ca-xpqrx\" (UID: \"f833b509-7a9e-4d56-be02-43df3d11b5ad\") " pod="openshift-image-registry/node-ca-xpqrx" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.297636 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f833b509-7a9e-4d56-be02-43df3d11b5ad-serviceca\") pod \"node-ca-xpqrx\" (UID: \"f833b509-7a9e-4d56-be02-43df3d11b5ad\") " pod="openshift-image-registry/node-ca-xpqrx" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.299123 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f833b509-7a9e-4d56-be02-43df3d11b5ad-host\") pod \"node-ca-xpqrx\" (UID: \"f833b509-7a9e-4d56-be02-43df3d11b5ad\") " pod="openshift-image-registry/node-ca-xpqrx" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.311418 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:53Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.327228 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxjq4\" (UniqueName: \"kubernetes.io/projected/f833b509-7a9e-4d56-be02-43df3d11b5ad-kube-api-access-jxjq4\") pod \"node-ca-xpqrx\" (UID: \"f833b509-7a9e-4d56-be02-43df3d11b5ad\") " pod="openshift-image-registry/node-ca-xpqrx" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.330092 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:53Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.332192 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.332213 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.332222 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.332234 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.332243 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:53Z","lastTransitionTime":"2026-02-17T17:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.350660 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:53Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.406997 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:53Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.439734 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.439769 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.439780 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.439795 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.439803 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:53Z","lastTransitionTime":"2026-02-17T17:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.445774 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xpqrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f833b509-7a9e-4d56-be02-43df3d11b5ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxjq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xpqrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:53Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.476113 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:53Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.512599 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:53Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.541991 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.542028 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.542037 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.542059 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.542069 4680 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:53Z","lastTransitionTime":"2026-02-17T17:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.544171 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-xpqrx" Feb 17 17:44:53 crc kubenswrapper[4680]: W0217 17:44:53.553189 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf833b509_7a9e_4d56_be02_43df3d11b5ad.slice/crio-938676362bfd8535e334932506b175083bb9152dc6b3f609289f0b79a73666d7 WatchSource:0}: Error finding container 938676362bfd8535e334932506b175083bb9152dc6b3f609289f0b79a73666d7: Status 404 returned error can't find the container with id 938676362bfd8535e334932506b175083bb9152dc6b3f609289f0b79a73666d7 Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.555747 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:53Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.593018 4680 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:53Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.631300 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:53Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.644434 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.644472 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.644484 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.644500 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.644512 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:53Z","lastTransitionTime":"2026-02-17T17:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.672479 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:53Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.710045 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:53Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.748032 4680 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.748064 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.748072 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.748084 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.748094 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:53Z","lastTransitionTime":"2026-02-17T17:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.755565 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:53Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:53 crc 
kubenswrapper[4680]: I0217 17:44:53.798215 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"Po
dInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:53Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.804098 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:44:53 crc kubenswrapper[4680]: E0217 17:44:53.804418 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:44:57.804398695 +0000 UTC m=+27.355297374 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.832551 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07783092-30f5-44f3-b2ca-8d3fc44309c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc1
8fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:53Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.850575 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.850607 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.850632 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.850646 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.850655 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:53Z","lastTransitionTime":"2026-02-17T17:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.905057 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.905451 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.905475 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:44:53 crc kubenswrapper[4680]: E0217 17:44:53.905265 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 17:44:53 crc kubenswrapper[4680]: E0217 17:44:53.905536 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 17:44:53 crc kubenswrapper[4680]: E0217 17:44:53.905553 4680 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 17:44:53 crc kubenswrapper[4680]: E0217 17:44:53.905564 4680 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.905505 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:44:53 crc kubenswrapper[4680]: E0217 17:44:53.905616 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 17:44:57.905601047 +0000 UTC m=+27.456499726 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 17:44:53 crc kubenswrapper[4680]: E0217 17:44:53.905633 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 17:44:57.905624798 +0000 UTC m=+27.456523477 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 17:44:53 crc kubenswrapper[4680]: E0217 17:44:53.905700 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 17:44:53 crc kubenswrapper[4680]: E0217 17:44:53.905730 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 17:44:53 crc kubenswrapper[4680]: E0217 17:44:53.905744 4680 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 17:44:53 crc kubenswrapper[4680]: E0217 17:44:53.905747 4680 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 17:44:53 crc kubenswrapper[4680]: E0217 17:44:53.905790 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 17:44:57.905781753 +0000 UTC m=+27.456680442 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 17:44:53 crc kubenswrapper[4680]: E0217 17:44:53.905809 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 17:44:57.905801803 +0000 UTC m=+27.456700482 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.953170 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.953241 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.953259 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.953301 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:53 crc kubenswrapper[4680]: I0217 17:44:53.953947 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:53Z","lastTransitionTime":"2026-02-17T17:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.053766 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 14:08:19.006958826 +0000 UTC Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.057114 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.057159 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.057176 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.057200 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.057224 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:54Z","lastTransitionTime":"2026-02-17T17:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.090438 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.107419 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.110377 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:54Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.110741 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.124184 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xpqrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f833b509-7a9e-4d56-be02-43df3d11b5ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxjq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:53Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-xpqrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:54Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.140199 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256
:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:54Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.153634 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:54Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.159945 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.159974 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.159984 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.160002 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.160014 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:54Z","lastTransitionTime":"2026-02-17T17:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.166017 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:54Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.181359 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:54Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.193010 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:54Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.204439 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:54Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.219002 4680 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:54Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.255917 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:54Z 
is after 2025-08-24T17:21:41Z" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.262959 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.263031 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.263052 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.263081 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.263099 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:54Z","lastTransitionTime":"2026-02-17T17:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.265271 4680 generic.go:334] "Generic (PLEG): container finished" podID="63b718d8-11fb-4613-af5a-d48392570e7b" containerID="1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f" exitCode=0 Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.265422 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" event={"ID":"63b718d8-11fb-4613-af5a-d48392570e7b","Type":"ContainerDied","Data":"1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f"} Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.271732 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" event={"ID":"2733b1c1-6f98-4184-b59b-123435eeb569","Type":"ContainerStarted","Data":"8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73"} Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.271785 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" event={"ID":"2733b1c1-6f98-4184-b59b-123435eeb569","Type":"ContainerStarted","Data":"a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347"} Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.271797 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" event={"ID":"2733b1c1-6f98-4184-b59b-123435eeb569","Type":"ContainerStarted","Data":"f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a"} Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.271810 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" event={"ID":"2733b1c1-6f98-4184-b59b-123435eeb569","Type":"ContainerStarted","Data":"e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d"} Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.271821 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" event={"ID":"2733b1c1-6f98-4184-b59b-123435eeb569","Type":"ContainerStarted","Data":"0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee"} Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.271831 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" event={"ID":"2733b1c1-6f98-4184-b59b-123435eeb569","Type":"ContainerStarted","Data":"7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df"} Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.272817 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xpqrx" event={"ID":"f833b509-7a9e-4d56-be02-43df3d11b5ad","Type":"ContainerStarted","Data":"c86675ddd7e142b026885dcf74f84b8cd7c320910e31138c9802664f544c1b74"} Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.272862 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xpqrx" event={"ID":"f833b509-7a9e-4d56-be02-43df3d11b5ad","Type":"ContainerStarted","Data":"938676362bfd8535e334932506b175083bb9152dc6b3f609289f0b79a73666d7"} Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.294604 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07783092-30f5-44f3-b2ca-8d3fc44309c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:54Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:54 crc kubenswrapper[4680]: E0217 17:44:54.309831 4680 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.351225 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:54Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.365659 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.365694 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.365704 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.365720 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.365733 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:54Z","lastTransitionTime":"2026-02-17T17:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.393509 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:54Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.430580 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:54Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.468106 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.468137 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.468149 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.468168 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.468179 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:54Z","lastTransitionTime":"2026-02-17T17:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.473703 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:54Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.510282 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xpqrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f833b509-7a9e-4d56-be02-43df3d11b5ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86675ddd7e142b026885dcf74f84b8cd7c320910e31138c9802664f544c1b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxjq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xpqrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:54Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.552416 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed 
certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\
\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:54Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.573905 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.573949 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.573960 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.573975 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.573986 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:54Z","lastTransitionTime":"2026-02-17T17:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.591889 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:54Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.633384 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:54Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.671087 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:54Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.679015 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.679097 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.679123 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.679152 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.679177 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:54Z","lastTransitionTime":"2026-02-17T17:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.717308 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\
\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:54Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.756936 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:54Z 
is after 2025-08-24T17:21:41Z" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.782018 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.782056 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.782065 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.782080 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.782089 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:54Z","lastTransitionTime":"2026-02-17T17:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.790969 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07783092-30f5-44f3-b2ca-8d3fc44309c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:54Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.846596 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d082317-b7b6-456f-8a6e-4824bb264ffd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b2f0c7e964fe5845bc4ba7e94c5e947524003ef995fa4275ad94fa9ed30c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7670a97c0e0af19970124e3466324b53d6a90ed83984c035733e5b80096a1a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fd85c0faedf90edc841af7367a6108d1f3edf4900009214a86da350b34529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c12c613dec1f633cee41c47f9286eaac20abe
4df2680d53a3421a2d30c4303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f447a89256dce3c9c818e37ad0ac00c5c5cc530898f333c5bdb91e50f904ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:54Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.874337 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:54Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.884471 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.884505 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.884518 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.884536 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.884549 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:54Z","lastTransitionTime":"2026-02-17T17:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.909878 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:54Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.951435 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:54Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.987453 4680 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.987505 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.987522 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.987545 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.987561 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:54Z","lastTransitionTime":"2026-02-17T17:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:54 crc kubenswrapper[4680]: I0217 17:44:54.993242 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:54Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.031272 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:55Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.054783 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 11:41:46.553920408 +0000 UTC Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.090212 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 
17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.090258 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.090277 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.090300 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.090381 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:55Z","lastTransitionTime":"2026-02-17T17:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.104874 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.104913 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.104874 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:44:55 crc kubenswrapper[4680]: E0217 17:44:55.104977 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:44:55 crc kubenswrapper[4680]: E0217 17:44:55.105025 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:44:55 crc kubenswrapper[4680]: E0217 17:44:55.105072 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.192760 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.192798 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.192809 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.192823 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.192832 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:55Z","lastTransitionTime":"2026-02-17T17:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.279974 4680 generic.go:334] "Generic (PLEG): container finished" podID="63b718d8-11fb-4613-af5a-d48392570e7b" containerID="a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f" exitCode=0 Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.280065 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" event={"ID":"63b718d8-11fb-4613-af5a-d48392570e7b","Type":"ContainerDied","Data":"a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f"} Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.294760 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.294788 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.294798 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.294814 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.294824 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:55Z","lastTransitionTime":"2026-02-17T17:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.301266 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:55Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.333725 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"nam
e\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"
cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:55Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.351691 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:55Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.364177 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07783092-30f5-44f3-b2ca-8d3fc44309c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:55Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.382444 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d082317-b7b6-456f-8a6e-4824bb264ffd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b2f0c7e964fe5845bc4ba7e94c5e947524003ef995fa4275ad94fa9ed30c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7670a97c0e0af19970124e3466324b53d6a90ed83984c035733e5b80096a1a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fd85c0faedf90edc841af7367a6108d1f3edf4900009214a86da350b34529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c12c613dec1f633cee41c47f9286eaac20abe
4df2680d53a3421a2d30c4303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f447a89256dce3c9c818e37ad0ac00c5c5cc530898f333c5bdb91e50f904ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:55Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.394506 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:55Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.397221 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.397272 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.397286 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.397305 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.397383 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:55Z","lastTransitionTime":"2026-02-17T17:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.404472 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:55Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.416053 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:55Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.422448 4680 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.428609 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:55Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.450094 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:55Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.488538 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xpqrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f833b509-7a9e-4d56-be02-43df3d11b5ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86675ddd7e142b026885dcf74f84b8cd7c320910e31138c9802664f544c1b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxjq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xpqrx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:55Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.499478 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.499504 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.499514 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.499564 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.499578 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:55Z","lastTransitionTime":"2026-02-17T17:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.532497 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 
17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:55Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.573788 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:55Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.601876 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.601928 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.601938 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.601953 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.601964 4680 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:55Z","lastTransitionTime":"2026-02-17T17:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.615350 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:55Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.649822 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:55Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.704575 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.704607 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.704616 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.704629 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.704637 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:55Z","lastTransitionTime":"2026-02-17T17:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.807078 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.807108 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.807118 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.807129 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.807137 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:55Z","lastTransitionTime":"2026-02-17T17:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.909892 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.909937 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.909949 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.909968 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:55 crc kubenswrapper[4680]: I0217 17:44:55.909980 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:55Z","lastTransitionTime":"2026-02-17T17:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.012253 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.012289 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.012302 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.012343 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.012358 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:56Z","lastTransitionTime":"2026-02-17T17:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.055846 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 08:12:53.447462625 +0000 UTC Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.114973 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.115007 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.115018 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.115035 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.115046 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:56Z","lastTransitionTime":"2026-02-17T17:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.217074 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.217146 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.217163 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.217178 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.217229 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:56Z","lastTransitionTime":"2026-02-17T17:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.287946 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" event={"ID":"2733b1c1-6f98-4184-b59b-123435eeb569","Type":"ContainerStarted","Data":"3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99"} Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.290466 4680 generic.go:334] "Generic (PLEG): container finished" podID="63b718d8-11fb-4613-af5a-d48392570e7b" containerID="f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702" exitCode=0 Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.290492 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" event={"ID":"63b718d8-11fb-4613-af5a-d48392570e7b","Type":"ContainerDied","Data":"f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702"} Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.309856 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mo
untPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:56Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.319821 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.319866 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.319879 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.319896 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.319908 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:56Z","lastTransitionTime":"2026-02-17T17:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.331763 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:56Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.346943 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:56Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.356335 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xpqrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f833b509-7a9e-4d56-be02-43df3d11b5ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86675ddd7e142b026885dcf74f84b8cd7c320910e31138c9802664f544c1b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-jxjq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xpqrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:56Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.370068 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:56Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.382975 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:56Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.398555 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:56Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.413210 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\
\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:56Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.423338 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.423369 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.423378 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.423391 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.423400 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:56Z","lastTransitionTime":"2026-02-17T17:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.426543 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07783092-30f5-44f3-b2ca-8d3fc44309c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:56Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.445548 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d082317-b7b6-456f-8a6e-4824bb264ffd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b2f0c7e964fe5845bc4ba7e94c5e947524003ef995fa4275ad94fa9ed30c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7670a97c0e0af19970124e3466324b53d6a90ed83984c035733e5b80096a1a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fd85c0faedf90edc841af7367a6108d1f3edf4900009214a86da350b34529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c12c613dec1f633cee41c47f9286eaac20abe
4df2680d53a3421a2d30c4303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f447a89256dce3c9c818e37ad0ac00c5c5cc530898f333c5bdb91e50f904ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:56Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.458603 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:56Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.468648 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T17:44:56Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.480244 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:56Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.497131 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-relea
se\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:56Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.515878 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:56Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.525914 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.525954 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.525963 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.525976 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.525987 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:56Z","lastTransitionTime":"2026-02-17T17:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.628358 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.628384 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.628391 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.628403 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.628412 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:56Z","lastTransitionTime":"2026-02-17T17:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.768927 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.768952 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.768960 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.768973 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.768981 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:56Z","lastTransitionTime":"2026-02-17T17:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.871742 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.871781 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.871791 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.871807 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.871817 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:56Z","lastTransitionTime":"2026-02-17T17:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.984974 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.985026 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.985042 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.985061 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:56 crc kubenswrapper[4680]: I0217 17:44:56.985077 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:56Z","lastTransitionTime":"2026-02-17T17:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.056853 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 14:21:37.302565828 +0000 UTC Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.087468 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.087493 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.087502 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.087514 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.087522 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:57Z","lastTransitionTime":"2026-02-17T17:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.105448 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.105542 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:44:57 crc kubenswrapper[4680]: E0217 17:44:57.105695 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:44:57 crc kubenswrapper[4680]: E0217 17:44:57.106498 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.106585 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:44:57 crc kubenswrapper[4680]: E0217 17:44:57.106647 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.189353 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.189430 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.189446 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.189467 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.189506 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:57Z","lastTransitionTime":"2026-02-17T17:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.292520 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.292558 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.292566 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.292579 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.292589 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:57Z","lastTransitionTime":"2026-02-17T17:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.300406 4680 generic.go:334] "Generic (PLEG): container finished" podID="63b718d8-11fb-4613-af5a-d48392570e7b" containerID="0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2" exitCode=0 Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.300456 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" event={"ID":"63b718d8-11fb-4613-af5a-d48392570e7b","Type":"ContainerDied","Data":"0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2"} Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.323200 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 
17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:57Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.338671 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:57Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.350510 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:57Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.360590 4680 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.361555 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:57Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.379459 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:57Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.391473 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07783092-30f5-44f3-b2ca-8d3fc44309c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:57Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.395478 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.395725 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.395735 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.395748 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.395757 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:57Z","lastTransitionTime":"2026-02-17T17:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.410780 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d082317-b7b6-456f-8a6e-4824bb264ffd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b2f0c7e964fe5845bc4ba7e94c5e947524003ef995fa4275ad94fa9ed30c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7670a97c0e0af19970124e3466324b53d6a90ed83984c035733e5b80096a1a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fd85c0faedf90edc841af7367a6108d1f3edf4900009214a86da350b34529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c12c613dec1f633cee41c47f9286eaac20abe4df2680d53a3421a2d30c4303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f447a89256dce3c9c818e37ad0ac00c5c5cc530898f333c5bdb91e50f904ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:57Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.425290 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:57Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.436626 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T17:44:57Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.446388 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:57Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.459304 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\
"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:57Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.471013 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:57Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.481934 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:57Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.493480 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:57Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.499752 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.499783 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.499794 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.499809 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.499821 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:57Z","lastTransitionTime":"2026-02-17T17:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.503843 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xpqrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f833b509-7a9e-4d56-be02-43df3d11b5ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86675ddd7e142b026885dcf74f84b8cd7c320910e31138c9802664f544c1b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxjq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xpqrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:57Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.602298 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.602348 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.602360 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.602372 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.602381 4680 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:57Z","lastTransitionTime":"2026-02-17T17:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.704602 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.704686 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.704712 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.704734 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.704778 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:57Z","lastTransitionTime":"2026-02-17T17:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.807865 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.807941 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.807958 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.807980 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.808022 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:57Z","lastTransitionTime":"2026-02-17T17:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.874371 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:44:57 crc kubenswrapper[4680]: E0217 17:44:57.874617 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 17:45:05.874591188 +0000 UTC m=+35.425489867 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.910257 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.910291 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.910300 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.910330 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.910343 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:57Z","lastTransitionTime":"2026-02-17T17:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.975172 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.975219 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.975250 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:44:57 crc kubenswrapper[4680]: I0217 17:44:57.975272 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:44:57 crc kubenswrapper[4680]: E0217 17:44:57.975304 
4680 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 17:44:57 crc kubenswrapper[4680]: E0217 17:44:57.975649 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 17:45:05.975613375 +0000 UTC m=+35.526512054 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 17:44:57 crc kubenswrapper[4680]: E0217 17:44:57.975758 4680 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 17:44:57 crc kubenswrapper[4680]: E0217 17:44:57.975797 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 17:44:57 crc kubenswrapper[4680]: E0217 17:44:57.975826 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 17:44:57 crc kubenswrapper[4680]: E0217 17:44:57.975846 4680 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 17:44:57 crc kubenswrapper[4680]: E0217 17:44:57.975888 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 17:45:05.975843142 +0000 UTC m=+35.526741831 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 17:44:57 crc kubenswrapper[4680]: E0217 17:44:57.975916 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 17:45:05.975898614 +0000 UTC m=+35.526797313 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 17:44:57 crc kubenswrapper[4680]: E0217 17:44:57.977709 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 17:44:57 crc kubenswrapper[4680]: E0217 17:44:57.977757 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 17:44:57 crc kubenswrapper[4680]: E0217 17:44:57.977778 4680 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 17:44:57 crc kubenswrapper[4680]: E0217 17:44:57.977850 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 17:45:05.977830234 +0000 UTC m=+35.528728943 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.012640 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.012719 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.012736 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.012759 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.012821 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:58Z","lastTransitionTime":"2026-02-17T17:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.057468 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 15:23:08.658623714 +0000 UTC Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.114809 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.114855 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.114869 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.114884 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.114897 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:58Z","lastTransitionTime":"2026-02-17T17:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.218024 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.218056 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.218065 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.218078 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.218088 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:58Z","lastTransitionTime":"2026-02-17T17:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.305370 4680 generic.go:334] "Generic (PLEG): container finished" podID="63b718d8-11fb-4613-af5a-d48392570e7b" containerID="df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268" exitCode=0 Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.305406 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" event={"ID":"63b718d8-11fb-4613-af5a-d48392570e7b","Type":"ContainerDied","Data":"df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268"} Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.319262 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:58Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.320796 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.320821 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.320830 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.320844 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.320854 4680 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:58Z","lastTransitionTime":"2026-02-17T17:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.330816 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:58Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.341775 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:58Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.355429 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:58Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.373281 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8b
c366a201e11ba1619702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:58Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.391583 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:58Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.405456 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07783092-30f5-44f3-b2ca-8d3fc44309c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:58Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.424031 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.424085 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.424094 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.424107 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.424115 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:58Z","lastTransitionTime":"2026-02-17T17:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.425423 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d082317-b7b6-456f-8a6e-4824bb264ffd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b2f0c7e964fe5845bc4ba7e94c5e947524003ef995fa4275ad94fa9ed30c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7670a97c0e0af19970124e3466324b53d6a90ed83984c035733e5b80096a1a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fd85c0faedf90edc841af7367a6108d1f3edf4900009214a86da350b34529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c12c613dec1f633cee41c47f9286eaac20abe4df2680d53a3421a2d30c4303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f447a89256dce3c9c818e37ad0ac00c5c5cc530898f333c5bdb91e50f904ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:58Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.438619 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:58Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.452440 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T17:44:58Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.465342 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:58Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.476833 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:58Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.503665 
4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:58Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.526080 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.526118 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.526129 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.526143 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.526155 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:58Z","lastTransitionTime":"2026-02-17T17:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.536086 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:58Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.553095 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xpqrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f833b509-7a9e-4d56-be02-43df3d11b5ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86675ddd7e142b026885dcf74f84b8cd7c320910e31138c9802664f544c1b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxjq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xpqrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:58Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.628632 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.628672 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.628684 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.628699 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.628709 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:58Z","lastTransitionTime":"2026-02-17T17:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.731932 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.731974 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.731984 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.731997 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.732008 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:58Z","lastTransitionTime":"2026-02-17T17:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.833890 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.833925 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.833933 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.833947 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.833956 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:58Z","lastTransitionTime":"2026-02-17T17:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.935786 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.935839 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.935855 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.935879 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:58 crc kubenswrapper[4680]: I0217 17:44:58.935896 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:58Z","lastTransitionTime":"2026-02-17T17:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.038593 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.038623 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.038633 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.038647 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.038658 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:59Z","lastTransitionTime":"2026-02-17T17:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.058147 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 09:53:04.088472256 +0000 UTC Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.104769 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.104782 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.104836 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:44:59 crc kubenswrapper[4680]: E0217 17:44:59.104885 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:44:59 crc kubenswrapper[4680]: E0217 17:44:59.105019 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:44:59 crc kubenswrapper[4680]: E0217 17:44:59.105227 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.141377 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.141406 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.141414 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.141429 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.141438 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:59Z","lastTransitionTime":"2026-02-17T17:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.243733 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.243960 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.244202 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.244384 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.244573 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:59Z","lastTransitionTime":"2026-02-17T17:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.315727 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" event={"ID":"63b718d8-11fb-4613-af5a-d48392570e7b","Type":"ContainerStarted","Data":"d46aa6668cf75e221b090abe55711c83759d52b5a278a3296b6d7310ff6b0861"} Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.323627 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" event={"ID":"2733b1c1-6f98-4184-b59b-123435eeb569","Type":"ContainerStarted","Data":"ac32401c95e56aad2b73b754e4d6021d45a4bb48ab58f26cbca28c2976e8ac28"} Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.324186 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.324227 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.324238 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.334455 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d46aa6668cf75e221b090abe55711c83759d52b5a278a3296b6d7310ff6b0861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:59Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.347208 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.347249 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:59 crc 
kubenswrapper[4680]: I0217 17:44:59.347260 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.347274 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.347286 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:59Z","lastTransitionTime":"2026-02-17T17:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.349937 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.351378 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.352562 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:59Z 
is after 2025-08-24T17:21:41Z" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.363189 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07783092-30f5-44f3-b2ca-8d3fc44309c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:59Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.384804 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d082317-b7b6-456f-8a6e-4824bb264ffd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b2f0c7e964fe5845bc4ba7e94c5e947524003ef995fa4275ad94fa9ed30c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7670a97c0e0af19970124e3466324b53d6a90ed83984c035733e5b80096a1a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fd85c0faedf90edc841af7367a6108d1f3edf4900009214a86da350b34529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c12c613dec1f633cee41c47f9286eaac20abe
4df2680d53a3421a2d30c4303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f447a89256dce3c9c818e37ad0ac00c5c5cc530898f333c5bdb91e50f904ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:59Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.396298 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:59Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.406615 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T17:44:59Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.415590 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:59Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.426728 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:59Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.440642 
4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:59Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.449511 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.449542 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.449551 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.449566 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.449575 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:59Z","lastTransitionTime":"2026-02-17T17:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.451389 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:59Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.460112 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xpqrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f833b509-7a9e-4d56-be02-43df3d11b5ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86675ddd7e142b026885dcf74f84b8cd7c320910e31138c9802664f544c1b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxjq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xpqrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:59Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.474701 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed 
certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\
\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:59Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.486637 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:59Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.499505 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:59Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.510430 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:59Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.520613 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:59Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.530602 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:59Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.540186 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:59Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.551478 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xpqrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f833b509-7a9e-4d56-be02-43df3d11b5ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86675ddd7e142b026885dcf74f84b8cd7c320910e31138c9802664f544c1b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxjq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xpqrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:59Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.552350 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.552382 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.552392 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.552407 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.552418 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:59Z","lastTransitionTime":"2026-02-17T17:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.565427 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:59Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.577003 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:59Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.590482 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:59Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.606011 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:59Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.625088 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d082317-b7b6-456f-8a6e-4824bb264ffd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b2f0c7e964fe5845bc4ba7e94c5e947524003ef995fa4275ad94fa9ed30c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7670a97c0e0af19970124e3466324b53d6a90ed83984c035733e5b80096a1a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fd85c0faedf90edc841af7367a6108d1f3edf4900009214a86da350b34529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c12c613dec1f633cee41c47f9286eaac20abe
4df2680d53a3421a2d30c4303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f447a89256dce3c9c818e37ad0ac00c5c5cc530898f333c5bdb91e50f904ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:59Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.639292 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:59Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.653994 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T17:44:59Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.654862 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.654985 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.655045 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.655167 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.655244 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:59Z","lastTransitionTime":"2026-02-17T17:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.664986 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:59Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.677429 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d46aa6668cf75e221b090abe55711c83759d52b5a278a3296b6d7310ff6b0861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216
cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binar
y-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated
\\\":{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:59Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.692666 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac32401c95e56aad2b73b754e4d6021d45a4bb48
ab58f26cbca28c2976e8ac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:59Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.704452 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07783092-30f5-44f3-b2ca-8d3fc44309c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:44:59Z is after 2025-08-24T17:21:41Z" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.757142 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.757189 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.757201 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.757223 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.757237 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:59Z","lastTransitionTime":"2026-02-17T17:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.859880 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.859916 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.859928 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.859943 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.859954 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:59Z","lastTransitionTime":"2026-02-17T17:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.962090 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.962139 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.962151 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.962210 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:44:59 crc kubenswrapper[4680]: I0217 17:44:59.962222 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:44:59Z","lastTransitionTime":"2026-02-17T17:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.058701 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 10:23:46.12235362 +0000 UTC Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.064288 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.064370 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.064387 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.064411 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.064428 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:00Z","lastTransitionTime":"2026-02-17T17:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.167147 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.167184 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.167194 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.167209 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.167220 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:00Z","lastTransitionTime":"2026-02-17T17:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.269490 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.269527 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.269536 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.269548 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.269557 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:00Z","lastTransitionTime":"2026-02-17T17:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.292653 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.292689 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.292699 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.292713 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.292724 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:00Z","lastTransitionTime":"2026-02-17T17:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:00 crc kubenswrapper[4680]: E0217 17:45:00.303799 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:00Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.306988 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.307021 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.307029 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.307077 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.307090 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:00Z","lastTransitionTime":"2026-02-17T17:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:00 crc kubenswrapper[4680]: E0217 17:45:00.319363 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:00Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.322176 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.322203 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.322211 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.322222 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.322232 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:00Z","lastTransitionTime":"2026-02-17T17:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:00 crc kubenswrapper[4680]: E0217 17:45:00.332496 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:00Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.335693 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.335728 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.335738 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.335753 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.335765 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:00Z","lastTransitionTime":"2026-02-17T17:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:00 crc kubenswrapper[4680]: E0217 17:45:00.345710 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:00Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.348886 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.348928 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.348941 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.348958 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.348969 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:00Z","lastTransitionTime":"2026-02-17T17:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:00 crc kubenswrapper[4680]: E0217 17:45:00.362098 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:00Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:00 crc kubenswrapper[4680]: E0217 17:45:00.362212 4680 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.371417 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.371458 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.371468 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.371481 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.371495 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:00Z","lastTransitionTime":"2026-02-17T17:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.473281 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.473331 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.473341 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.473353 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.473364 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:00Z","lastTransitionTime":"2026-02-17T17:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.575192 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.575231 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.575240 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.575255 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.575266 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:00Z","lastTransitionTime":"2026-02-17T17:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.677232 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.677271 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.677279 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.677292 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.677302 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:00Z","lastTransitionTime":"2026-02-17T17:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.779468 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.779500 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.779509 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.779523 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.779532 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:00Z","lastTransitionTime":"2026-02-17T17:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.881560 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.881800 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.881866 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.881937 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.881993 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:00Z","lastTransitionTime":"2026-02-17T17:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.984541 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.984579 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.984589 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.984602 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:00 crc kubenswrapper[4680]: I0217 17:45:00.984613 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:00Z","lastTransitionTime":"2026-02-17T17:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.059075 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 22:25:59.20884634 +0000 UTC Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.075475 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.087300 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.087361 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.087373 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.087390 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.087399 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:01Z","lastTransitionTime":"2026-02-17T17:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.088951 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.101495 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sh
a256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.104205 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.104268 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.104268 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:01 crc kubenswrapper[4680]: E0217 17:45:01.104468 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:45:01 crc kubenswrapper[4680]: E0217 17:45:01.104544 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:45:01 crc kubenswrapper[4680]: E0217 17:45:01.104708 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.121492 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.133340 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.143373 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.153371 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.163873 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.176666 4680 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d46aa6668cf75e221b090abe55711c83759d52b5a278a3296b6d7310ff6b0861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.189229 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.189260 4680 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.189270 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.189287 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.189298 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:01Z","lastTransitionTime":"2026-02-17T17:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.195873 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac32401c95e56aad2b73b754e4d6021d45a4bb48
ab58f26cbca28c2976e8ac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.207339 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07783092-30f5-44f3-b2ca-8d3fc44309c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.225638 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d082317-b7b6-456f-8a6e-4824bb264ffd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b2f0c7e964fe5845bc4ba7e94c5e947524003ef995fa4275ad94fa9ed30c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7670a97c0e0af19970124e3466324b53d6a90ed83984c035733e5b80096a1a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fd85c0faedf90edc841af7367a6108d1f3edf4900009214a86da350b34529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c12c613dec1f633cee41c47f9286eaac20abe4df2680d53a3421a2d30c4303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f447a89256dce3c9c818e37ad0ac00c5c5cc530898f333c5bdb91e50f904ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.238930 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.253509 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.267405 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.278361 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xpqrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f833b509-7a9e-4d56-be02-43df3d11b5ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86675ddd7e142b026885dcf74f84b8cd7c320910e31138c9802664f544c1b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxjq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xpqrx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.287914 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xpqrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f833b509-7a9e-4d56-be02-43df3d11b5ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86675ddd7e142b026885dcf74f84b8cd7c320910e31138c9802664f544c1b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxjq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xpqrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.291011 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.291041 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.291050 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:01 crc 
kubenswrapper[4680]: I0217 17:45:01.291065 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.291073 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:01Z","lastTransitionTime":"2026-02-17T17:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.299205 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.311235 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.323623 4680 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.329441 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vx4n_2733b1c1-6f98-4184-b59b-123435eeb569/ovnkube-controller/0.log" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.332298 4680 generic.go:334] "Generic (PLEG): container finished" podID="2733b1c1-6f98-4184-b59b-123435eeb569" containerID="ac32401c95e56aad2b73b754e4d6021d45a4bb48ab58f26cbca28c2976e8ac28" exitCode=1 Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.332347 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" event={"ID":"2733b1c1-6f98-4184-b59b-123435eeb569","Type":"ContainerDied","Data":"ac32401c95e56aad2b73b754e4d6021d45a4bb48ab58f26cbca28c2976e8ac28"} Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.333260 4680 scope.go:117] "RemoveContainer" containerID="ac32401c95e56aad2b73b754e4d6021d45a4bb48ab58f26cbca28c2976e8ac28" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.341602 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.355724 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.373306 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d082317-b7b6-456f-8a6e-4824bb264ffd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b2f0c7e964fe5845bc4ba7e94c5e947524003ef995fa4275ad94fa9ed30c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7670a97c0e0af19970124e3466324b53d6a90ed83984c035733e5b80096a1a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fd85c0faedf90edc841af7367a6108d1f3edf4900009214a86da350b34529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c12c613dec1f633cee41c47f9286eaac20abe
4df2680d53a3421a2d30c4303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f447a89256dce3c9c818e37ad0ac00c5c5cc530898f333c5bdb91e50f904ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.384301 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.394013 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.394137 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.394157 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.394167 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.394180 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.394189 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:01Z","lastTransitionTime":"2026-02-17T17:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.403623 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.418382 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d46aa6668cf75e221b090abe55711c83759d52b5a278a3296b6d7310ff6b0861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216
cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binar
y-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated
\\\":{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.443095 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac32401c95e56aad2b73b754e4d6021d45a4bb48
ab58f26cbca28c2976e8ac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.456493 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07783092-30f5-44f3-b2ca-8d3fc44309c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.467369 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.477432 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.489795 4680 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.499169 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.499193 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.499202 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.499218 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.499229 4680 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:01Z","lastTransitionTime":"2026-02-17T17:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.502063 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.515123 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.529353 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.539672 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07783092-30f5-44f3-b2ca-8d3fc44309c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.556552 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d082317-b7b6-456f-8a6e-4824bb264ffd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b2f0c7e964fe5845bc4ba7e94c5e947524003ef995fa4275ad94fa9ed30c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7670a97c0e0af19970124e3466324b53d6a90ed83984c035733e5b80096a1a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fd85c0faedf90edc841af7367a6108d1f3edf4900009214a86da350b34529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c12c613dec1f633cee41c47f9286eaac20abe
4df2680d53a3421a2d30c4303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f447a89256dce3c9c818e37ad0ac00c5c5cc530898f333c5bdb91e50f904ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.567163 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.576568 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.585615 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.599338 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d46aa6668cf75e221b090abe55711c83759d52b5a278a3296b6d7310ff6b0861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:55Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 
2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.601483 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.601519 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.601531 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.601560 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.601569 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:01Z","lastTransitionTime":"2026-02-17T17:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.622510 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac32401c95e56aad2b73b754e4d6021d45a4bb48
ab58f26cbca28c2976e8ac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac32401c95e56aad2b73b754e4d6021d45a4bb48ab58f26cbca28c2976e8ac28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"message\\\":\\\"8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 17:45:01.113380 5907 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 17:45:01.113396 5907 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 17:45:01.113429 5907 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 17:45:01.113436 5907 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 17:45:01.113458 5907 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 17:45:01.113473 5907 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 17:45:01.113489 5907 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 17:45:01.113502 5907 factory.go:656] Stopping watch factory\\\\nI0217 17:45:01.113516 5907 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 17:45:01.113523 5907 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 17:45:01.113535 5907 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 17:45:01.113541 5907 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 17:45:01.113547 5907 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 17:45:01.113552 5907 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 17:45:01.113558 5907 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.636021 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.649221 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.662585 4680 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.674014 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xpqrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f833b509-7a9e-4d56-be02-43df3d11b5ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86675ddd7e142b026885dcf74f84b8cd7c320910e31138c9802664f544c1b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxjq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xpqrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.703806 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.703842 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.703852 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.703866 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.703877 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:01Z","lastTransitionTime":"2026-02-17T17:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.805994 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.806057 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.806066 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.806082 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.806091 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:01Z","lastTransitionTime":"2026-02-17T17:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.908028 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.908063 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.908071 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.908085 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 17:45:01 crc kubenswrapper[4680]: I0217 17:45:01.908093 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:01Z","lastTransitionTime":"2026-02-17T17:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.010477 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.010525 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.010534 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.010546 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.010554 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:02Z","lastTransitionTime":"2026-02-17T17:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.060148 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 05:32:04.87289917 +0000 UTC
Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.113128 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.113157 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.113165 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.113178 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.113186 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:02Z","lastTransitionTime":"2026-02-17T17:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.215085 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.215117 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.215126 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.215138 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.215163 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:02Z","lastTransitionTime":"2026-02-17T17:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.316947 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.316977 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.316988 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.317003 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.317016 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:02Z","lastTransitionTime":"2026-02-17T17:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.336587 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vx4n_2733b1c1-6f98-4184-b59b-123435eeb569/ovnkube-controller/1.log"
Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.337014 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vx4n_2733b1c1-6f98-4184-b59b-123435eeb569/ovnkube-controller/0.log"
Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.338802 4680 generic.go:334] "Generic (PLEG): container finished" podID="2733b1c1-6f98-4184-b59b-123435eeb569" containerID="2aa24429b9b6ae98376118beec69f6a705f06cc4e447acfbf6aa23d10272b57a" exitCode=1
Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.338832 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" event={"ID":"2733b1c1-6f98-4184-b59b-123435eeb569","Type":"ContainerDied","Data":"2aa24429b9b6ae98376118beec69f6a705f06cc4e447acfbf6aa23d10272b57a"}
Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.338860 4680 scope.go:117] "RemoveContainer" containerID="ac32401c95e56aad2b73b754e4d6021d45a4bb48ab58f26cbca28c2976e8ac28"
Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.339438 4680 scope.go:117] "RemoveContainer" containerID="2aa24429b9b6ae98376118beec69f6a705f06cc4e447acfbf6aa23d10272b57a"
Feb 17 17:45:02 crc kubenswrapper[4680]: E0217 17:45:02.339609 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8vx4n_openshift-ovn-kubernetes(2733b1c1-6f98-4184-b59b-123435eeb569)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" podUID="2733b1c1-6f98-4184-b59b-123435eeb569"
Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.352738 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:02Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.363313 4680 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.365035 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:02Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.375060 4680 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:02Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.385381 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:02Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.395050 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:02Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.404161 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:02Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.416860 4680 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d46aa6668cf75e221b090abe55711c83759d52b5a278a3296b6d7310ff6b0861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:02Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.419120 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.419158 4680 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.419166 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.419181 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.419192 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:02Z","lastTransitionTime":"2026-02-17T17:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.433339 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aa24429b9b6ae98376118beec69f6a705f06cc4
e447acfbf6aa23d10272b57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac32401c95e56aad2b73b754e4d6021d45a4bb48ab58f26cbca28c2976e8ac28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"message\\\":\\\"8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 17:45:01.113380 5907 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 17:45:01.113396 5907 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 17:45:01.113429 5907 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 17:45:01.113436 5907 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 17:45:01.113458 5907 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 17:45:01.113473 5907 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 17:45:01.113489 5907 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 17:45:01.113502 5907 factory.go:656] Stopping watch factory\\\\nI0217 17:45:01.113516 5907 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 17:45:01.113523 5907 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 17:45:01.113535 5907 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 17:45:01.113541 5907 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 17:45:01.113547 5907 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 17:45:01.113552 5907 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 17:45:01.113558 5907 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa24429b9b6ae98376118beec69f6a705f06cc4e447acfbf6aa23d10272b57a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T17:45:02Z\\\",\\\"message\\\":\\\"] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 17:45:02.054941 6033 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0217 17:45:02.054955 6033 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-t2c5v\\\\nI0217 17:45:02.054964 6033 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-t2c5v\\\\nI0217 17:45:02.054971 6033 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-t2c5v in node crc\\\\nI0217 17:45:02.054976 6033 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-t2c5v after 0 failed attempt(s)\\\\nI0217 17:45:02.054983 6033 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-t2c5v\\\\nI0217 17:45:02.054991 6033 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0217 17:45:02.054997 6033 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0217 17:45:02.055002 6033 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node 
crc\\\\nF0217 17:45:02.055001 6033 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\
"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:02Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.447187 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07783092-30f5-44f3-b2ca-8d3fc44309c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:02Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.464229 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d082317-b7b6-456f-8a6e-4824bb264ffd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b2f0c7e964fe5845bc4ba7e94c5e947524003ef995fa4275ad94fa9ed30c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7670a97c0e0af19970124e3466324b53d6a90ed83984c035733e5b80096a1a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fd85c0faedf90edc841af7367a6108d1f3edf4900009214a86da350b34529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c12c613dec1f633cee41c47f9286eaac20abe4df2680d53a3421a2d30c4303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f447a89256dce3c9c818e37ad0ac00c5c5cc530898f333c5bdb91e50f904ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:02Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.474468 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:02Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.484983 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:02Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.495163 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:02Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.505823 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:02Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.516395 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xpqrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f833b509-7a9e-4d56-be02-43df3d11b5ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86675ddd7e142b026885dcf74f84b8cd7c320910e31138c9802664f544c1b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxjq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xpqrx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:02Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.520885 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.520910 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.520919 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.520930 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.520939 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:02Z","lastTransitionTime":"2026-02-17T17:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.623082 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.623143 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.623155 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.623172 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.623183 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:02Z","lastTransitionTime":"2026-02-17T17:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.725200 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.725261 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.725270 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.725285 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.725295 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:02Z","lastTransitionTime":"2026-02-17T17:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.826941 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.826983 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.826994 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.827007 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.827015 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:02Z","lastTransitionTime":"2026-02-17T17:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.929010 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.929050 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.929060 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.929077 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:02 crc kubenswrapper[4680]: I0217 17:45:02.929088 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:02Z","lastTransitionTime":"2026-02-17T17:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.031604 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.031654 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.031663 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.031681 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.031689 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:03Z","lastTransitionTime":"2026-02-17T17:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.060469 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 23:40:16.538277088 +0000 UTC Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.105242 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.105305 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.105362 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:03 crc kubenswrapper[4680]: E0217 17:45:03.105457 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:45:03 crc kubenswrapper[4680]: E0217 17:45:03.105926 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:45:03 crc kubenswrapper[4680]: E0217 17:45:03.106030 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.134356 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.134419 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.134434 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.134459 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.134477 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:03Z","lastTransitionTime":"2026-02-17T17:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.236881 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.236908 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.236916 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.236928 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.236937 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:03Z","lastTransitionTime":"2026-02-17T17:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.341659 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.341746 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.341767 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.341788 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.341837 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:03Z","lastTransitionTime":"2026-02-17T17:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.344255 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vx4n_2733b1c1-6f98-4184-b59b-123435eeb569/ovnkube-controller/1.log" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.348737 4680 scope.go:117] "RemoveContainer" containerID="2aa24429b9b6ae98376118beec69f6a705f06cc4e447acfbf6aa23d10272b57a" Feb 17 17:45:03 crc kubenswrapper[4680]: E0217 17:45:03.348988 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8vx4n_openshift-ovn-kubernetes(2733b1c1-6f98-4184-b59b-123435eeb569)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.361998 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:03Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.373484 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:03Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.385793 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:03Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.395043 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xpqrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f833b509-7a9e-4d56-be02-43df3d11b5ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86675ddd7e142b026885dcf74f84b8cd7c320910e31138c9802664f544c1b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxjq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xpqrx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:03Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.408216 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:03Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.418982 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:03Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.432832 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:03Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.443804 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.443832 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.443842 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.443858 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.443893 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:03Z","lastTransitionTime":"2026-02-17T17:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.445779 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:03Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.458778 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:03Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.471935 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:03Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.487513 4680 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d46aa6668cf75e221b090abe55711c83759d52b5a278a3296b6d7310ff6b0861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:03Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.503911 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aa24429b9b6ae98376118beec69f6a705f06cc4e447acfbf6aa23d10272b57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa24429b9b6ae98376118beec69f6a705f06cc4e447acfbf6aa23d10272b57a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T17:45:02Z\\\",\\\"message\\\":\\\"] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 17:45:02.054941 6033 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0217 17:45:02.054955 6033 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-t2c5v\\\\nI0217 17:45:02.054964 6033 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-t2c5v\\\\nI0217 17:45:02.054971 6033 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-t2c5v in node crc\\\\nI0217 17:45:02.054976 6033 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-t2c5v after 0 failed attempt(s)\\\\nI0217 17:45:02.054983 6033 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-t2c5v\\\\nI0217 17:45:02.054991 6033 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0217 17:45:02.054997 6033 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0217 17:45:02.055002 6033 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0217 17:45:02.055001 6033 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:45:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8vx4n_openshift-ovn-kubernetes(2733b1c1-6f98-4184-b59b-123435eeb569)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:03Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.518195 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07783092-30f5-44f3-b2ca-8d3fc44309c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:03Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.539451 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d082317-b7b6-456f-8a6e-4824bb264ffd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b2f0c7e964fe5845bc4ba7e94c5e947524003ef995fa4275ad94fa9ed30c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7670a97c0e0af19970124e3466324b53d6a90ed83984c035733e5b80096a1a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fd85c0faedf90edc841af7367a6108d1f3edf4900009214a86da350b34529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c12c613dec1f633cee41c47f9286eaac20abe
4df2680d53a3421a2d30c4303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f447a89256dce3c9c818e37ad0ac00c5c5cc530898f333c5bdb91e50f904ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:03Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.546366 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.546392 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.546401 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.546415 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.546422 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:03Z","lastTransitionTime":"2026-02-17T17:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.554077 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:03Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.652916 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.652959 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.652971 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.652994 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.653006 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:03Z","lastTransitionTime":"2026-02-17T17:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.754993 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.755031 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.755042 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.755056 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.755065 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:03Z","lastTransitionTime":"2026-02-17T17:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.797706 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb"] Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.798083 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.799728 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.800623 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.813417 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:03Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.828012 4680 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d46aa6668cf75e221b090abe55711c83759d52b5a278a3296b6d7310ff6b0861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:03Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.845287 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aa24429b9b6ae98376118beec69f6a705f06cc4e447acfbf6aa23d10272b57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa24429b9b6ae98376118beec69f6a705f06cc4e447acfbf6aa23d10272b57a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T17:45:02Z\\\",\\\"message\\\":\\\"] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 17:45:02.054941 6033 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0217 17:45:02.054955 6033 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-t2c5v\\\\nI0217 17:45:02.054964 6033 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-t2c5v\\\\nI0217 17:45:02.054971 6033 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-t2c5v in node crc\\\\nI0217 17:45:02.054976 6033 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-t2c5v after 0 failed attempt(s)\\\\nI0217 17:45:02.054983 6033 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-t2c5v\\\\nI0217 17:45:02.054991 6033 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0217 17:45:02.054997 6033 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0217 17:45:02.055002 6033 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0217 17:45:02.055001 6033 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:45:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8vx4n_openshift-ovn-kubernetes(2733b1c1-6f98-4184-b59b-123435eeb569)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:03Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.857066 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.857139 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.857150 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.857179 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.857189 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:03Z","lastTransitionTime":"2026-02-17T17:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.859150 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07783092-30f5-44f3-b2ca-8d3fc44309c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:03Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.878363 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d082317-b7b6-456f-8a6e-4824bb264ffd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b2f0c7e964fe5845bc4ba7e94c5e947524003ef995fa4275ad94fa9ed30c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7670a97c0e0af19970124e3466324b53d6a90ed83984c035733e5b80096a1a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fd85c0faedf90edc841af7367a6108d1f3edf4900009214a86da350b34529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c12c613dec1f633cee41c47f9286eaac20abe
4df2680d53a3421a2d30c4303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f447a89256dce3c9c818e37ad0ac00c5c5cc530898f333c5bdb91e50f904ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:03Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.891456 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:03Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.900502 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T17:45:03Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.913871 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:03Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.925298 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:03Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.935629 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c2f672e-82df-4d1a-9ce7-a76595c08a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-knlwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:03Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.936261 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0c2f672e-82df-4d1a-9ce7-a76595c08a27-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-knlwb\" (UID: \"0c2f672e-82df-4d1a-9ce7-a76595c08a27\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.936465 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0c2f672e-82df-4d1a-9ce7-a76595c08a27-env-overrides\") pod \"ovnkube-control-plane-749d76644c-knlwb\" (UID: \"0c2f672e-82df-4d1a-9ce7-a76595c08a27\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.936572 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0c2f672e-82df-4d1a-9ce7-a76595c08a27-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-knlwb\" (UID: \"0c2f672e-82df-4d1a-9ce7-a76595c08a27\") 
" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.936698 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5pss\" (UniqueName: \"kubernetes.io/projected/0c2f672e-82df-4d1a-9ce7-a76595c08a27-kube-api-access-j5pss\") pod \"ovnkube-control-plane-749d76644c-knlwb\" (UID: \"0c2f672e-82df-4d1a-9ce7-a76595c08a27\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.946894 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:03Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.957704 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xpqrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f833b509-7a9e-4d56-be02-43df3d11b5ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86675ddd7e142b026885dcf74f84b8cd7c320910e31138c9802664f544c1b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxjq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xpqrx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:03Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.958961 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.958998 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.959009 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.959025 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.959036 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:03Z","lastTransitionTime":"2026-02-17T17:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.971241 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:03Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:03 crc kubenswrapper[4680]: I0217 17:45:03.983127 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:03Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.012257 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:04Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.037555 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5pss\" (UniqueName: \"kubernetes.io/projected/0c2f672e-82df-4d1a-9ce7-a76595c08a27-kube-api-access-j5pss\") pod \"ovnkube-control-plane-749d76644c-knlwb\" (UID: \"0c2f672e-82df-4d1a-9ce7-a76595c08a27\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.037629 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0c2f672e-82df-4d1a-9ce7-a76595c08a27-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-knlwb\" (UID: \"0c2f672e-82df-4d1a-9ce7-a76595c08a27\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.037646 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0c2f672e-82df-4d1a-9ce7-a76595c08a27-env-overrides\") pod \"ovnkube-control-plane-749d76644c-knlwb\" (UID: \"0c2f672e-82df-4d1a-9ce7-a76595c08a27\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.037663 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0c2f672e-82df-4d1a-9ce7-a76595c08a27-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-knlwb\" (UID: \"0c2f672e-82df-4d1a-9ce7-a76595c08a27\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.038270 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0c2f672e-82df-4d1a-9ce7-a76595c08a27-env-overrides\") pod \"ovnkube-control-plane-749d76644c-knlwb\" (UID: \"0c2f672e-82df-4d1a-9ce7-a76595c08a27\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.038358 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0c2f672e-82df-4d1a-9ce7-a76595c08a27-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-knlwb\" (UID: \"0c2f672e-82df-4d1a-9ce7-a76595c08a27\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" Feb 17 17:45:04 crc kubenswrapper[4680]: 
I0217 17:45:04.043904 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0c2f672e-82df-4d1a-9ce7-a76595c08a27-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-knlwb\" (UID: \"0c2f672e-82df-4d1a-9ce7-a76595c08a27\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.057069 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5pss\" (UniqueName: \"kubernetes.io/projected/0c2f672e-82df-4d1a-9ce7-a76595c08a27-kube-api-access-j5pss\") pod \"ovnkube-control-plane-749d76644c-knlwb\" (UID: \"0c2f672e-82df-4d1a-9ce7-a76595c08a27\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.060829 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 15:40:56.756349611 +0000 UTC Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.061542 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.061674 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.062169 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.062267 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.062383 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:04Z","lastTransitionTime":"2026-02-17T17:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.063155 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:04Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.109556 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.164464 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.164533 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.164548 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.164563 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.164573 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:04Z","lastTransitionTime":"2026-02-17T17:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.266074 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.266101 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.266112 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.266125 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.266134 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:04Z","lastTransitionTime":"2026-02-17T17:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.352265 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" event={"ID":"0c2f672e-82df-4d1a-9ce7-a76595c08a27","Type":"ContainerStarted","Data":"1d423bdc5590605117c90d7c3ebf60862e980a9ceb1dcc86f9f2a0fe53b956bb"} Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.352336 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" event={"ID":"0c2f672e-82df-4d1a-9ce7-a76595c08a27","Type":"ContainerStarted","Data":"4fac5265823fddc28db144a7df7c6458c89d279be832a020758d0f081a36af2f"} Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.352351 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" event={"ID":"0c2f672e-82df-4d1a-9ce7-a76595c08a27","Type":"ContainerStarted","Data":"1ef50ed5b0dfeb5d314ae0d73f3fe9e9143cfaa246e8b70ce25fc5955651ebf0"} Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.365256 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:04Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.368078 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.368130 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.368140 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.368152 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.368161 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:04Z","lastTransitionTime":"2026-02-17T17:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.374214 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xpqrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f833b509-7a9e-4d56-be02-43df3d11b5ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86675ddd7e142b026885dcf74f84b8cd7c320910e31138c9802664f544c1b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxjq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xpqrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:04Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.385085 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:04Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.397303 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:04Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.410896 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:04Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.423466 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:04Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.448655 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d082317-b7b6-456f-8a6e-4824bb264ffd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b2f0c7e964fe5845bc4ba7e94c5e947524003ef995fa4275ad94fa9ed30c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7670a97c0e0af19970124e3466324b53d6a90ed83984c035733e5b80096a1a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fd85c0faedf90edc841af7367a6108d1f3edf4900009214a86da350b34529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c12c613dec1f633cee41c47f9286eaac20abe
4df2680d53a3421a2d30c4303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f447a89256dce3c9c818e37ad0ac00c5c5cc530898f333c5bdb91e50f904ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:04Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.460707 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:04Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.473364 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.473400 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.473411 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.473426 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.473439 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:04Z","lastTransitionTime":"2026-02-17T17:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.477947 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:04Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.487544 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:04Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.500031 4680 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d46aa6668cf75e221b090abe55711c83759d52b5a278a3296b6d7310ff6b0861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:04Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.515863 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aa24429b9b6ae98376118beec69f6a705f06cc4e447acfbf6aa23d10272b57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa24429b9b6ae98376118beec69f6a705f06cc4e447acfbf6aa23d10272b57a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T17:45:02Z\\\",\\\"message\\\":\\\"] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 17:45:02.054941 6033 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0217 17:45:02.054955 6033 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-t2c5v\\\\nI0217 17:45:02.054964 6033 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-t2c5v\\\\nI0217 17:45:02.054971 6033 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-t2c5v in node crc\\\\nI0217 17:45:02.054976 6033 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-t2c5v after 0 failed attempt(s)\\\\nI0217 17:45:02.054983 6033 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-t2c5v\\\\nI0217 17:45:02.054991 6033 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0217 17:45:02.054997 6033 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0217 17:45:02.055002 6033 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0217 17:45:02.055001 6033 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:45:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8vx4n_openshift-ovn-kubernetes(2733b1c1-6f98-4184-b59b-123435eeb569)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:04Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.530475 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07783092-30f5-44f3-b2ca-8d3fc44309c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:04Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.544341 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:04Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.554517 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c2f672e-82df-4d1a-9ce7-a76595c08a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5265823fddc28db144a7df7c6458c89d279be832a020758d0f081a36af2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d423bdc5590605117c90d7c3ebf60862e980a9ceb1dcc86f9f2a0fe53b956bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-knlwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:04Z is after 2025-08-24T17:21:41Z" Feb 17 
17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.567523 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:04Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.576466 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.576508 4680 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.576522 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.576540 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.576585 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:04Z","lastTransitionTime":"2026-02-17T17:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.679181 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.679217 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.679226 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.679240 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.679250 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:04Z","lastTransitionTime":"2026-02-17T17:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.781845 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.781885 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.781894 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.781907 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.781916 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:04Z","lastTransitionTime":"2026-02-17T17:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.884383 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.884417 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.884426 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.884443 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.884454 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:04Z","lastTransitionTime":"2026-02-17T17:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.899170 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-psx9f"] Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.899684 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:04 crc kubenswrapper[4680]: E0217 17:45:04.899767 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.914330 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:04Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.929064 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:04Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.943135 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:04Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.959712 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\
\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:04Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.981288 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aa24429b9b6ae98376118beec69f6a705f06cc4
e447acfbf6aa23d10272b57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa24429b9b6ae98376118beec69f6a705f06cc4e447acfbf6aa23d10272b57a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T17:45:02Z\\\",\\\"message\\\":\\\"] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 17:45:02.054941 6033 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0217 17:45:02.054955 6033 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-t2c5v\\\\nI0217 17:45:02.054964 6033 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-t2c5v\\\\nI0217 17:45:02.054971 6033 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-t2c5v in node crc\\\\nI0217 17:45:02.054976 6033 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-t2c5v after 0 failed attempt(s)\\\\nI0217 17:45:02.054983 6033 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-t2c5v\\\\nI0217 17:45:02.054991 6033 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0217 17:45:02.054997 6033 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0217 17:45:02.055002 6033 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0217 17:45:02.055001 6033 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:45:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8vx4n_openshift-ovn-kubernetes(2733b1c1-6f98-4184-b59b-123435eeb569)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:04Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.985907 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.985944 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.985956 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.985971 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.985985 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:04Z","lastTransitionTime":"2026-02-17T17:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:04 crc kubenswrapper[4680]: I0217 17:45:04.994701 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07783092-30f5-44f3-b2ca-8d3fc44309c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:04Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.017914 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d082317-b7b6-456f-8a6e-4824bb264ffd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b2f0c7e964fe5845bc4ba7e94c5e947524003ef995fa4275ad94fa9ed30c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7670a97c0e0af19970124e3466324b53d6a90ed83984c035733e5b80096a1a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fd85c0faedf90edc841af7367a6108d1f3edf4900009214a86da350b34529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c12c613dec1f633cee41c47f9286eaac20abe
4df2680d53a3421a2d30c4303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f447a89256dce3c9c818e37ad0ac00c5c5cc530898f333c5bdb91e50f904ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:05Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.032874 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:05Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.045555 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T17:45:05Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.048050 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1270b10e-8243-4940-b8a1-e0d36a0ef933-metrics-certs\") pod \"network-metrics-daemon-psx9f\" (UID: \"1270b10e-8243-4940-b8a1-e0d36a0ef933\") " pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.048232 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66m8l\" (UniqueName: \"kubernetes.io/projected/1270b10e-8243-4940-b8a1-e0d36a0ef933-kube-api-access-66m8l\") pod \"network-metrics-daemon-psx9f\" (UID: \"1270b10e-8243-4940-b8a1-e0d36a0ef933\") " pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.058870 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:05Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.062059 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 11:19:02.857761072 +0000 UTC Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.073362 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d46aa6668cf75e221b090abe55711c83759d52b5a278a3296b6d7310ff6b0861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0
f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b99352
68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:05Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.086821 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:05Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.088758 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.088787 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.088796 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.088809 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.088820 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:05Z","lastTransitionTime":"2026-02-17T17:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.099467 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:05Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.104756 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.104838 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:05 crc kubenswrapper[4680]: E0217 17:45:05.104863 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:45:05 crc kubenswrapper[4680]: E0217 17:45:05.104967 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.105153 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:05 crc kubenswrapper[4680]: E0217 17:45:05.105446 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.110099 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c2f672e-82df-4d1a-9ce7-a76595c08a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5265823fddc28db144a7df7c6458c89d279be832a020758d0f081a36af2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d423bdc5590605117c90d7c3ebf60862e980a9ceb1dcc86f9f2a0fe53b956bb\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-knlwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:05Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.121670 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-psx9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1270b10e-8243-4940-b8a1-e0d36a0ef933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66m8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66m8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-psx9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:05Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.136130 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:05Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.145920 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xpqrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f833b509-7a9e-4d56-be02-43df3d11b5ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86675ddd7e142b026885dcf74f84b8cd7c320910e31138c9802664f544c1b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxjq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xpqrx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:05Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.149367 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1270b10e-8243-4940-b8a1-e0d36a0ef933-metrics-certs\") pod \"network-metrics-daemon-psx9f\" (UID: \"1270b10e-8243-4940-b8a1-e0d36a0ef933\") " pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:05 crc kubenswrapper[4680]: E0217 17:45:05.149564 4680 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 17:45:05 crc kubenswrapper[4680]: E0217 17:45:05.149647 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1270b10e-8243-4940-b8a1-e0d36a0ef933-metrics-certs podName:1270b10e-8243-4940-b8a1-e0d36a0ef933 nodeName:}" failed. No retries permitted until 2026-02-17 17:45:05.649622425 +0000 UTC m=+35.200521164 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1270b10e-8243-4940-b8a1-e0d36a0ef933-metrics-certs") pod "network-metrics-daemon-psx9f" (UID: "1270b10e-8243-4940-b8a1-e0d36a0ef933") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.149737 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66m8l\" (UniqueName: \"kubernetes.io/projected/1270b10e-8243-4940-b8a1-e0d36a0ef933-kube-api-access-66m8l\") pod \"network-metrics-daemon-psx9f\" (UID: \"1270b10e-8243-4940-b8a1-e0d36a0ef933\") " pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.167139 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66m8l\" (UniqueName: \"kubernetes.io/projected/1270b10e-8243-4940-b8a1-e0d36a0ef933-kube-api-access-66m8l\") pod \"network-metrics-daemon-psx9f\" (UID: \"1270b10e-8243-4940-b8a1-e0d36a0ef933\") " pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.191642 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.191935 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.192017 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.192101 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.192158 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:05Z","lastTransitionTime":"2026-02-17T17:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.294802 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.295104 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.295223 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.295359 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.295487 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:05Z","lastTransitionTime":"2026-02-17T17:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.397862 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.398006 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.398025 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.398040 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.398051 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:05Z","lastTransitionTime":"2026-02-17T17:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.501375 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.501494 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.501516 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.501539 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.501556 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:05Z","lastTransitionTime":"2026-02-17T17:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.603927 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.604280 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.604432 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.604578 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.604684 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:05Z","lastTransitionTime":"2026-02-17T17:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.656308 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1270b10e-8243-4940-b8a1-e0d36a0ef933-metrics-certs\") pod \"network-metrics-daemon-psx9f\" (UID: \"1270b10e-8243-4940-b8a1-e0d36a0ef933\") " pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:05 crc kubenswrapper[4680]: E0217 17:45:05.656482 4680 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 17:45:05 crc kubenswrapper[4680]: E0217 17:45:05.656547 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1270b10e-8243-4940-b8a1-e0d36a0ef933-metrics-certs podName:1270b10e-8243-4940-b8a1-e0d36a0ef933 nodeName:}" failed. No retries permitted until 2026-02-17 17:45:06.656532885 +0000 UTC m=+36.207431564 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1270b10e-8243-4940-b8a1-e0d36a0ef933-metrics-certs") pod "network-metrics-daemon-psx9f" (UID: "1270b10e-8243-4940-b8a1-e0d36a0ef933") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.709489 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.709925 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.709934 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.709953 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.709963 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:05Z","lastTransitionTime":"2026-02-17T17:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.812015 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.812239 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.812397 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.812589 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.812640 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:05Z","lastTransitionTime":"2026-02-17T17:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.916095 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.916137 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.916147 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.916161 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.916171 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:05Z","lastTransitionTime":"2026-02-17T17:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:05 crc kubenswrapper[4680]: I0217 17:45:05.959023 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:45:05 crc kubenswrapper[4680]: E0217 17:45:05.959160 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:45:21.95914413 +0000 UTC m=+51.510042809 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.018928 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.018966 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.018978 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.018993 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.019005 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:06Z","lastTransitionTime":"2026-02-17T17:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.060633 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.060679 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.060700 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.060730 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:06 crc kubenswrapper[4680]: E0217 17:45:06.060810 4680 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Feb 17 17:45:06 crc kubenswrapper[4680]: E0217 17:45:06.060861 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 17:45:22.060848527 +0000 UTC m=+51.611747206 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 17:45:06 crc kubenswrapper[4680]: E0217 17:45:06.060868 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 17:45:06 crc kubenswrapper[4680]: E0217 17:45:06.060905 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 17:45:06 crc kubenswrapper[4680]: E0217 17:45:06.060918 4680 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 17:45:06 crc kubenswrapper[4680]: E0217 17:45:06.060967 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 17:45:22.06095144 +0000 UTC m=+51.611850109 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 17:45:06 crc kubenswrapper[4680]: E0217 17:45:06.060868 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 17:45:06 crc kubenswrapper[4680]: E0217 17:45:06.060986 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 17:45:06 crc kubenswrapper[4680]: E0217 17:45:06.060995 4680 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 17:45:06 crc kubenswrapper[4680]: E0217 17:45:06.061021 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-17 17:45:22.061015242 +0000 UTC m=+51.611913911 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 17:45:06 crc kubenswrapper[4680]: E0217 17:45:06.061058 4680 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 17:45:06 crc kubenswrapper[4680]: E0217 17:45:06.061139 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 17:45:22.061119846 +0000 UTC m=+51.612018555 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.063483 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 23:48:00.3627071 +0000 UTC Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.121231 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.121275 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.121287 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.121303 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.121330 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:06Z","lastTransitionTime":"2026-02-17T17:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.223279 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.223311 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.223337 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.223350 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.223358 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:06Z","lastTransitionTime":"2026-02-17T17:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.326151 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.326227 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.326248 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.326273 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.326291 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:06Z","lastTransitionTime":"2026-02-17T17:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.429240 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.429311 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.429372 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.429399 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.429421 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:06Z","lastTransitionTime":"2026-02-17T17:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.532018 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.532090 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.532114 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.532142 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.532163 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:06Z","lastTransitionTime":"2026-02-17T17:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.635731 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.635785 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.635806 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.635830 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.635847 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:06Z","lastTransitionTime":"2026-02-17T17:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.666576 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1270b10e-8243-4940-b8a1-e0d36a0ef933-metrics-certs\") pod \"network-metrics-daemon-psx9f\" (UID: \"1270b10e-8243-4940-b8a1-e0d36a0ef933\") " pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:06 crc kubenswrapper[4680]: E0217 17:45:06.666798 4680 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 17:45:06 crc kubenswrapper[4680]: E0217 17:45:06.666885 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1270b10e-8243-4940-b8a1-e0d36a0ef933-metrics-certs podName:1270b10e-8243-4940-b8a1-e0d36a0ef933 nodeName:}" failed. No retries permitted until 2026-02-17 17:45:08.666862434 +0000 UTC m=+38.217761143 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1270b10e-8243-4940-b8a1-e0d36a0ef933-metrics-certs") pod "network-metrics-daemon-psx9f" (UID: "1270b10e-8243-4940-b8a1-e0d36a0ef933") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.738815 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.738891 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.738919 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.738947 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.738969 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:06Z","lastTransitionTime":"2026-02-17T17:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.841579 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.841633 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.841650 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.841674 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.841690 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:06Z","lastTransitionTime":"2026-02-17T17:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.943831 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.943875 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.943896 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.943914 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:06 crc kubenswrapper[4680]: I0217 17:45:06.943926 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:06Z","lastTransitionTime":"2026-02-17T17:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.046622 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.046688 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.046700 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.046717 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.046728 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:07Z","lastTransitionTime":"2026-02-17T17:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.064086 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 16:05:45.85351675 +0000 UTC Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.104828 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.104973 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:07 crc kubenswrapper[4680]: E0217 17:45:07.105650 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.105176 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.105018 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:07 crc kubenswrapper[4680]: E0217 17:45:07.105973 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:45:07 crc kubenswrapper[4680]: E0217 17:45:07.105459 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:45:07 crc kubenswrapper[4680]: E0217 17:45:07.105803 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.149954 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.150229 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.150407 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.150568 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.150796 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:07Z","lastTransitionTime":"2026-02-17T17:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.252728 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.253136 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.253295 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.253479 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.253722 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:07Z","lastTransitionTime":"2026-02-17T17:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.356422 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.356694 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.356784 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.356886 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.357009 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:07Z","lastTransitionTime":"2026-02-17T17:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.460131 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.460602 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.460672 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.460751 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.460831 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:07Z","lastTransitionTime":"2026-02-17T17:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.563266 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.563541 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.563620 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.563717 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.563805 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:07Z","lastTransitionTime":"2026-02-17T17:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.665982 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.666362 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.666507 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.666672 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.666815 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:07Z","lastTransitionTime":"2026-02-17T17:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.772588 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.772667 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.772684 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.772706 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.772724 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:07Z","lastTransitionTime":"2026-02-17T17:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.876470 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.876514 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.876528 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.876543 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.876555 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:07Z","lastTransitionTime":"2026-02-17T17:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.979267 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.979304 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.979337 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.979353 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:07 crc kubenswrapper[4680]: I0217 17:45:07.979364 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:07Z","lastTransitionTime":"2026-02-17T17:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.064899 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 01:40:41.370229501 +0000 UTC Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.081680 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.081710 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.081718 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.081731 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.081742 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:08Z","lastTransitionTime":"2026-02-17T17:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.184726 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.184782 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.184803 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.184830 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.184851 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:08Z","lastTransitionTime":"2026-02-17T17:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.288077 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.288530 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.288727 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.288925 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.289116 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:08Z","lastTransitionTime":"2026-02-17T17:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.391797 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.391848 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.391863 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.391885 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.391901 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:08Z","lastTransitionTime":"2026-02-17T17:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.494990 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.495048 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.495060 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.495077 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.495090 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:08Z","lastTransitionTime":"2026-02-17T17:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.597087 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.597133 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.597146 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.597162 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.597173 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:08Z","lastTransitionTime":"2026-02-17T17:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.683808 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1270b10e-8243-4940-b8a1-e0d36a0ef933-metrics-certs\") pod \"network-metrics-daemon-psx9f\" (UID: \"1270b10e-8243-4940-b8a1-e0d36a0ef933\") " pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:08 crc kubenswrapper[4680]: E0217 17:45:08.683940 4680 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 17:45:08 crc kubenswrapper[4680]: E0217 17:45:08.684018 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1270b10e-8243-4940-b8a1-e0d36a0ef933-metrics-certs podName:1270b10e-8243-4940-b8a1-e0d36a0ef933 nodeName:}" failed. No retries permitted until 2026-02-17 17:45:12.68400058 +0000 UTC m=+42.234899259 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1270b10e-8243-4940-b8a1-e0d36a0ef933-metrics-certs") pod "network-metrics-daemon-psx9f" (UID: "1270b10e-8243-4940-b8a1-e0d36a0ef933") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.699100 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.699362 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.699457 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.699551 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.699647 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:08Z","lastTransitionTime":"2026-02-17T17:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.803284 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.803708 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.803900 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.804064 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.804195 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:08Z","lastTransitionTime":"2026-02-17T17:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.907021 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.907080 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.907097 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.907121 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:08 crc kubenswrapper[4680]: I0217 17:45:08.907142 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:08Z","lastTransitionTime":"2026-02-17T17:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.009856 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.009904 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.009922 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.009948 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.009966 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:09Z","lastTransitionTime":"2026-02-17T17:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.065997 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 07:02:30.567433558 +0000 UTC Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.104182 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.104551 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:09 crc kubenswrapper[4680]: E0217 17:45:09.104702 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.105122 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.105169 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:09 crc kubenswrapper[4680]: E0217 17:45:09.105254 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:45:09 crc kubenswrapper[4680]: E0217 17:45:09.105446 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:45:09 crc kubenswrapper[4680]: E0217 17:45:09.105506 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.111463 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.111494 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.111502 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.111522 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.111531 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:09Z","lastTransitionTime":"2026-02-17T17:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.213750 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.213795 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.213826 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.213842 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.213854 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:09Z","lastTransitionTime":"2026-02-17T17:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.316252 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.316632 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.316770 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.316907 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.317030 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:09Z","lastTransitionTime":"2026-02-17T17:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.419827 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.419865 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.419874 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.419892 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.419910 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:09Z","lastTransitionTime":"2026-02-17T17:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.521826 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.522272 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.522485 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.522827 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.522930 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:09Z","lastTransitionTime":"2026-02-17T17:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.625684 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.625736 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.625758 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.625780 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.625797 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:09Z","lastTransitionTime":"2026-02-17T17:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.727967 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.728003 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.728012 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.728025 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.728035 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:09Z","lastTransitionTime":"2026-02-17T17:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.830458 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.830501 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.830512 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.830534 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.830545 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:09Z","lastTransitionTime":"2026-02-17T17:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.933080 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.933137 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.933154 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.933170 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:09 crc kubenswrapper[4680]: I0217 17:45:09.933178 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:09Z","lastTransitionTime":"2026-02-17T17:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.035086 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.035139 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.035151 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.035168 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.035180 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:10Z","lastTransitionTime":"2026-02-17T17:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.066504 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 09:53:31.23246077 +0000 UTC Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.136960 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.137032 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.137057 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.137088 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.137110 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:10Z","lastTransitionTime":"2026-02-17T17:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.239387 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.239425 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.239434 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.239448 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.239458 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:10Z","lastTransitionTime":"2026-02-17T17:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.341642 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.341933 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.342035 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.342114 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.342214 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:10Z","lastTransitionTime":"2026-02-17T17:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.445107 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.445155 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.445171 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.445191 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.445206 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:10Z","lastTransitionTime":"2026-02-17T17:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.547283 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.547543 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.547633 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.547712 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.547782 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:10Z","lastTransitionTime":"2026-02-17T17:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.650685 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.650744 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.650754 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.650768 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.650777 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:10Z","lastTransitionTime":"2026-02-17T17:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.745838 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.746150 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.746230 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.746343 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.746435 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:10Z","lastTransitionTime":"2026-02-17T17:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:10 crc kubenswrapper[4680]: E0217 17:45:10.760491 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:10Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.764022 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.764254 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.764368 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.764454 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.764538 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:10Z","lastTransitionTime":"2026-02-17T17:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:10 crc kubenswrapper[4680]: E0217 17:45:10.775951 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:10Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.779016 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.779070 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.779080 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.779094 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.779104 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:10Z","lastTransitionTime":"2026-02-17T17:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:10 crc kubenswrapper[4680]: E0217 17:45:10.790222 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:10Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.793054 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.793084 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.793096 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.793110 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.793119 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:10Z","lastTransitionTime":"2026-02-17T17:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:10 crc kubenswrapper[4680]: E0217 17:45:10.804516 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:10Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.807708 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.807823 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.807925 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.808022 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.808109 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:10Z","lastTransitionTime":"2026-02-17T17:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:10 crc kubenswrapper[4680]: E0217 17:45:10.820583 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:10Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:10 crc kubenswrapper[4680]: E0217 17:45:10.820690 4680 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.822704 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.822812 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.822822 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.822836 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.822845 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:10Z","lastTransitionTime":"2026-02-17T17:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.924830 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.924859 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.924866 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.924877 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:10 crc kubenswrapper[4680]: I0217 17:45:10.924886 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:10Z","lastTransitionTime":"2026-02-17T17:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.027047 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.027088 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.027099 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.027115 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.027126 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:11Z","lastTransitionTime":"2026-02-17T17:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.067378 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 10:33:58.413719312 +0000 UTC Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.104691 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:11 crc kubenswrapper[4680]: E0217 17:45:11.104792 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.104963 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:11 crc kubenswrapper[4680]: E0217 17:45:11.105009 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.105045 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.105064 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:11 crc kubenswrapper[4680]: E0217 17:45:11.105081 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:45:11 crc kubenswrapper[4680]: E0217 17:45:11.105165 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.120809 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:11Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.129577 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.129816 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.129943 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.130048 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.130141 4680 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:11Z","lastTransitionTime":"2026-02-17T17:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.135670 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:11Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.145850 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:11Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.155899 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:11Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.170510 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:11Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.183972 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d46aa6668cf75e221b090abe55711c83759d52b5a278a3296b6d7310ff6b0861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:11Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.200426 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f
102df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aa24429b9b6ae98376118beec69f6a705f06cc4e447acfbf6aa23d10272b57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa24429b9b6ae98376118beec69f6a705f06cc4e447acfbf6aa23d10272b57a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T17:45:02Z\\\",\\\"message\\\":\\\"] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 17:45:02.054941 6033 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0217 17:45:02.054955 6033 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-t2c5v\\\\nI0217 17:45:02.054964 6033 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-t2c5v\\\\nI0217 17:45:02.054971 6033 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-t2c5v in node crc\\\\nI0217 17:45:02.054976 6033 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-t2c5v after 0 failed attempt(s)\\\\nI0217 17:45:02.054983 6033 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-t2c5v\\\\nI0217 17:45:02.054991 6033 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0217 17:45:02.054997 6033 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0217 17:45:02.055002 6033 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0217 17:45:02.055001 6033 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:45:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8vx4n_openshift-ovn-kubernetes(2733b1c1-6f98-4184-b59b-123435eeb569)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:11Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.211622 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07783092-30f5-44f3-b2ca-8d3fc44309c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:11Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.230271 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d082317-b7b6-456f-8a6e-4824bb264ffd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b2f0c7e964fe5845bc4ba7e94c5e947524003ef995fa4275ad94fa9ed30c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7670a97c0e0af19970124e3466324b53d6a90ed83984c035733e5b80096a1a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fd85c0faedf90edc841af7367a6108d1f3edf4900009214a86da350b34529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c12c613dec1f633cee41c47f9286eaac20abe4df2680d53a3421a2d30c4303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f447a89256dce3c9c818e37ad0ac00c5c5cc530898f333c5bdb91e50f904ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:11Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.232110 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.232133 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.232141 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.232153 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.232161 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:11Z","lastTransitionTime":"2026-02-17T17:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.244754 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:11Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.255012 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T17:45:11Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.267223 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:11Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.280065 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:11Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.290239 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c2f672e-82df-4d1a-9ce7-a76595c08a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5265823fddc28db144a7df7c6458c89d279be832a020758d0f081a36af2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d423bdc5590605117c90d7c3ebf60862e980a9ceb1dcc86f9f2a0fe53b956bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-knlwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:11Z is after 2025-08-24T17:21:41Z" Feb 17 
17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.300599 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-psx9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1270b10e-8243-4940-b8a1-e0d36a0ef933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66m8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66m8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-psx9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:11Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.310190 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:11Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.320391 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xpqrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f833b509-7a9e-4d56-be02-43df3d11b5ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86675ddd7e142b026885dcf74f84b8cd7c320910e31138c9802664f544c1b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxjq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xpqrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:11Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.333918 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.333956 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.333998 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.334013 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.334023 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:11Z","lastTransitionTime":"2026-02-17T17:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.436500 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.436539 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.436556 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.436573 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.436583 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:11Z","lastTransitionTime":"2026-02-17T17:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.538734 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.538775 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.538786 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.538804 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.538816 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:11Z","lastTransitionTime":"2026-02-17T17:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.640904 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.640953 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.641000 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.641018 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.641030 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:11Z","lastTransitionTime":"2026-02-17T17:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.743252 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.743283 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.743291 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.743305 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.743329 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:11Z","lastTransitionTime":"2026-02-17T17:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.845243 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.845267 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.845275 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.845287 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.845296 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:11Z","lastTransitionTime":"2026-02-17T17:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.947448 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.947476 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.947485 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.947497 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:11 crc kubenswrapper[4680]: I0217 17:45:11.947505 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:11Z","lastTransitionTime":"2026-02-17T17:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.050800 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.050842 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.050855 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.050870 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.050883 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:12Z","lastTransitionTime":"2026-02-17T17:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.067916 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 02:28:42.545524891 +0000 UTC Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.153264 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.153353 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.153373 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.153394 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.153412 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:12Z","lastTransitionTime":"2026-02-17T17:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.255535 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.255566 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.255580 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.255597 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.255623 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:12Z","lastTransitionTime":"2026-02-17T17:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.358005 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.358042 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.358054 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.358071 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.358080 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:12Z","lastTransitionTime":"2026-02-17T17:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.460075 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.460102 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.460110 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.460122 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.460131 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:12Z","lastTransitionTime":"2026-02-17T17:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.562406 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.562441 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.562453 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.562468 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.562480 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:12Z","lastTransitionTime":"2026-02-17T17:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.664430 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.664477 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.664490 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.664506 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.664517 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:12Z","lastTransitionTime":"2026-02-17T17:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.730694 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1270b10e-8243-4940-b8a1-e0d36a0ef933-metrics-certs\") pod \"network-metrics-daemon-psx9f\" (UID: \"1270b10e-8243-4940-b8a1-e0d36a0ef933\") " pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:12 crc kubenswrapper[4680]: E0217 17:45:12.730894 4680 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 17:45:12 crc kubenswrapper[4680]: E0217 17:45:12.731015 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1270b10e-8243-4940-b8a1-e0d36a0ef933-metrics-certs podName:1270b10e-8243-4940-b8a1-e0d36a0ef933 nodeName:}" failed. No retries permitted until 2026-02-17 17:45:20.730982703 +0000 UTC m=+50.281881422 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1270b10e-8243-4940-b8a1-e0d36a0ef933-metrics-certs") pod "network-metrics-daemon-psx9f" (UID: "1270b10e-8243-4940-b8a1-e0d36a0ef933") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.766750 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.766795 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.766806 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.766821 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.766832 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:12Z","lastTransitionTime":"2026-02-17T17:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.870486 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.870559 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.870585 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.870614 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.870642 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:12Z","lastTransitionTime":"2026-02-17T17:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.973490 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.973541 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.973553 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.973570 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:12 crc kubenswrapper[4680]: I0217 17:45:12.973583 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:12Z","lastTransitionTime":"2026-02-17T17:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.068486 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 23:00:58.802798366 +0000 UTC Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.076101 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.076150 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.076162 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.076180 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.076192 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:13Z","lastTransitionTime":"2026-02-17T17:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.104948 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.105045 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.105064 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.105069 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:13 crc kubenswrapper[4680]: E0217 17:45:13.105348 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:45:13 crc kubenswrapper[4680]: E0217 17:45:13.105573 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:45:13 crc kubenswrapper[4680]: E0217 17:45:13.105657 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:45:13 crc kubenswrapper[4680]: E0217 17:45:13.105735 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.179683 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.180196 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.180574 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.180811 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.181005 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:13Z","lastTransitionTime":"2026-02-17T17:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.283510 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.283796 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.283951 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.284046 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.284136 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:13Z","lastTransitionTime":"2026-02-17T17:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.386159 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.386488 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.386628 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.386723 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.386815 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:13Z","lastTransitionTime":"2026-02-17T17:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.489046 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.489086 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.489097 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.489115 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.489124 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:13Z","lastTransitionTime":"2026-02-17T17:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.591579 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.591622 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.591633 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.591649 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.591660 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:13Z","lastTransitionTime":"2026-02-17T17:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.693850 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.693887 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.693897 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.693912 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.693922 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:13Z","lastTransitionTime":"2026-02-17T17:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.796568 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.796611 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.796625 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.796646 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.796661 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:13Z","lastTransitionTime":"2026-02-17T17:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.899619 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.899645 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.899653 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.899667 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:13 crc kubenswrapper[4680]: I0217 17:45:13.899676 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:13Z","lastTransitionTime":"2026-02-17T17:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.001993 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.002037 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.002054 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.002075 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.002091 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:14Z","lastTransitionTime":"2026-02-17T17:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.069083 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 12:52:03.569854234 +0000 UTC Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.104738 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.104800 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.104817 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.105254 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.105300 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:14Z","lastTransitionTime":"2026-02-17T17:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.207999 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.208049 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.208061 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.208077 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.208089 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:14Z","lastTransitionTime":"2026-02-17T17:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.310377 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.310408 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.310417 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.310430 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.310438 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:14Z","lastTransitionTime":"2026-02-17T17:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.413160 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.413195 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.413206 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.413220 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.413231 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:14Z","lastTransitionTime":"2026-02-17T17:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.515369 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.515612 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.515710 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.515812 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.515903 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:14Z","lastTransitionTime":"2026-02-17T17:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.618174 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.618244 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.618266 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.618295 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.618346 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:14Z","lastTransitionTime":"2026-02-17T17:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.721227 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.721271 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.721295 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.721339 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.721359 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:14Z","lastTransitionTime":"2026-02-17T17:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.823604 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.823651 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.823665 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.823683 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.823696 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:14Z","lastTransitionTime":"2026-02-17T17:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.926383 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.926491 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.926525 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.926549 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:14 crc kubenswrapper[4680]: I0217 17:45:14.926567 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:14Z","lastTransitionTime":"2026-02-17T17:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.029776 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.029832 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.029844 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.029859 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.029870 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:15Z","lastTransitionTime":"2026-02-17T17:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.070095 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 12:36:13.759910644 +0000 UTC Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.104707 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.104758 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:15 crc kubenswrapper[4680]: E0217 17:45:15.104949 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.105095 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:15 crc kubenswrapper[4680]: E0217 17:45:15.105446 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.105513 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:15 crc kubenswrapper[4680]: E0217 17:45:15.105645 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:45:15 crc kubenswrapper[4680]: E0217 17:45:15.106528 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.107013 4680 scope.go:117] "RemoveContainer" containerID="2aa24429b9b6ae98376118beec69f6a705f06cc4e447acfbf6aa23d10272b57a" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.134881 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.135936 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.135971 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.135991 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.136209 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:15Z","lastTransitionTime":"2026-02-17T17:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.240228 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.240359 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.240421 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.240514 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.240598 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:15Z","lastTransitionTime":"2026-02-17T17:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.342484 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.342523 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.342533 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.342547 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.342555 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:15Z","lastTransitionTime":"2026-02-17T17:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.382928 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vx4n_2733b1c1-6f98-4184-b59b-123435eeb569/ovnkube-controller/1.log" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.385375 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" event={"ID":"2733b1c1-6f98-4184-b59b-123435eeb569","Type":"ContainerStarted","Data":"1bb0b65b5702e94ab1ddf321b85020cace62dd55923328e4a8acebea0830d7a1"} Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.386438 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.398267 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:15Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.408824 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:15Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.420180 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:15Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.432073 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:15Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.445340 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.445401 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.445411 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.445425 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.445433 4680 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:15Z","lastTransitionTime":"2026-02-17T17:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.449971 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d082317-b7b6-456f-8a6e-4824bb264ffd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b2f0c7e964fe5845bc4ba7e94c5e947524003ef995fa4275ad94fa9ed30c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7670a97c0e0af19970124e3466324b53d6a90ed83984c035733e5b80096a1a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fd85c0faedf90edc841af7367a6108d1f3edf4900009214a86da350b34529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c12c613dec1f633cee41c47f9286eaac20abe4df2680d53a3421a2d30c4303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f447a89256dce3c9c818e37ad0ac00c5c5cc530898f333c5bdb91e50f904ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:15Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.460759 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:15Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.469825 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T17:45:15Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.479227 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:15Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.491418 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d46aa6668cf75e221b090abe55711c83759d52b5a278a3296b6d7310ff6b0861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:55Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:15Z is after 
2025-08-24T17:21:41Z" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.517888 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8
dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb0b65b5702e94ab1ddf321b85020cace62dd55923328e4a8acebea0830d7a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa24429b9b6ae98376118beec69f6a705f06cc4e447acfbf6aa23d10272b57a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T17:45:02Z\\\",\\\"message\\\":\\\"] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 17:45:02.054941 6033 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0217 17:45:02.054955 6033 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-t2c5v\\\\nI0217 17:45:02.054964 6033 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-t2c5v\\\\nI0217 17:45:02.054971 6033 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-t2c5v in node crc\\\\nI0217 17:45:02.054976 6033 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-t2c5v after 0 failed attempt(s)\\\\nI0217 17:45:02.054983 6033 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-t2c5v\\\\nI0217 17:45:02.054991 6033 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0217 17:45:02.054997 6033 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0217 17:45:02.055002 6033 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0217 17:45:02.055001 6033 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:45:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:15Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.529236 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07783092-30f5-44f3-b2ca-8d3fc44309c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:15Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.546253 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:15Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.547682 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.547712 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.547723 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.547738 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.547749 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:15Z","lastTransitionTime":"2026-02-17T17:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.559118 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:15Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.623199 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c2f672e-82df-4d1a-9ce7-a76595c08a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5265823fddc28db144a7df7c6458c89d279be832a020758d0f081a36af2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d423bdc5590605117c90d7c3ebf60862e980a9ceb1dcc86f9f2a0fe53b956bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-knlwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:15Z is after 2025-08-24T17:21:41Z" Feb 17 
17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.637063 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-psx9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1270b10e-8243-4940-b8a1-e0d36a0ef933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66m8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66m8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-psx9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:15Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.649976 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.650001 4680 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.650009 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.650021 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.650029 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:15Z","lastTransitionTime":"2026-02-17T17:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.650767 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xpqrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f833b509-7a9e-4d56-be02-43df3d11b5ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86675ddd7e142b026885dcf74f84b8cd7c320910e31138c9802664f544c1b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxjq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xpqrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-17T17:45:15Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.662354 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:15Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.752396 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.752429 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.752437 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.752451 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.752462 4680 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:15Z","lastTransitionTime":"2026-02-17T17:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.854503 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.854545 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.854556 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.854572 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.854583 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:15Z","lastTransitionTime":"2026-02-17T17:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.957176 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.957212 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.957221 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.957235 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:15 crc kubenswrapper[4680]: I0217 17:45:15.957245 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:15Z","lastTransitionTime":"2026-02-17T17:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.064163 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.064205 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.064214 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.064227 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.064236 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:16Z","lastTransitionTime":"2026-02-17T17:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.070635 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 03:05:40.545309058 +0000 UTC Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.166362 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.166402 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.166413 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.166427 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.166437 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:16Z","lastTransitionTime":"2026-02-17T17:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.269548 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.269584 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.269592 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.269605 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.269614 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:16Z","lastTransitionTime":"2026-02-17T17:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.371943 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.371979 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.371987 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.372018 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.372028 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:16Z","lastTransitionTime":"2026-02-17T17:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.390380 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vx4n_2733b1c1-6f98-4184-b59b-123435eeb569/ovnkube-controller/2.log" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.391248 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vx4n_2733b1c1-6f98-4184-b59b-123435eeb569/ovnkube-controller/1.log" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.394880 4680 generic.go:334] "Generic (PLEG): container finished" podID="2733b1c1-6f98-4184-b59b-123435eeb569" containerID="1bb0b65b5702e94ab1ddf321b85020cace62dd55923328e4a8acebea0830d7a1" exitCode=1 Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.394937 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" event={"ID":"2733b1c1-6f98-4184-b59b-123435eeb569","Type":"ContainerDied","Data":"1bb0b65b5702e94ab1ddf321b85020cace62dd55923328e4a8acebea0830d7a1"} Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.395065 4680 scope.go:117] "RemoveContainer" containerID="2aa24429b9b6ae98376118beec69f6a705f06cc4e447acfbf6aa23d10272b57a" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.396941 4680 scope.go:117] "RemoveContainer" containerID="1bb0b65b5702e94ab1ddf321b85020cace62dd55923328e4a8acebea0830d7a1" Feb 17 17:45:16 crc kubenswrapper[4680]: E0217 17:45:16.397205 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8vx4n_openshift-ovn-kubernetes(2733b1c1-6f98-4184-b59b-123435eeb569)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.413599 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:16Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.422469 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xpqrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f833b509-7a9e-4d56-be02-43df3d11b5ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86675ddd7e142b026885dcf74f84b8cd7c320910e31138c9802664f544c1b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxjq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xpqrx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:16Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.435036 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:16Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.447159 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:16Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.458654 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:16Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.469943 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\
\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:16Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.474100 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.474134 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.474146 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.474163 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.474176 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:16Z","lastTransitionTime":"2026-02-17T17:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.480688 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:16Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.492972 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d46aa6668cf75e221b090abe55711c83759d52b5a278a3296b6d7310ff6b0861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:55Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:16Z is after 
2025-08-24T17:21:41Z" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.509534 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8
dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb0b65b5702e94ab1ddf321b85020cace62dd55923328e4a8acebea0830d7a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa24429b9b6ae98376118beec69f6a705f06cc4e447acfbf6aa23d10272b57a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T17:45:02Z\\\",\\\"message\\\":\\\"] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 17:45:02.054941 6033 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0217 17:45:02.054955 6033 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-t2c5v\\\\nI0217 17:45:02.054964 6033 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-t2c5v\\\\nI0217 17:45:02.054971 6033 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-t2c5v in node crc\\\\nI0217 17:45:02.054976 6033 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-t2c5v after 0 failed attempt(s)\\\\nI0217 17:45:02.054983 6033 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-t2c5v\\\\nI0217 17:45:02.054991 6033 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0217 17:45:02.054997 6033 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0217 17:45:02.055002 6033 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0217 17:45:02.055001 6033 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:45:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb0b65b5702e94ab1ddf321b85020cace62dd55923328e4a8acebea0830d7a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T17:45:16Z\\\",\\\"message\\\":\\\"03] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0217 17:45:16.004754 6239 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0217 17:45:16.004762 6239 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0217 17:45:16.004768 6239 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0217 17:45:16.004772 6239 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0217 17:45:16.004733 6239 services_controller.go:452] Built service openshift-machine-api/machine-api-controllers per-node LB for network=default: []services.LB{}\\\\nI0217 17:45:16.004783 6239 services_controller.go:453] Built service openshift-machine-api/machine-api-controllers template LB for network=default: []services.LB{}\\\\nF0217 17:45:16.004785 6239 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to 
sta\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:16Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.521733 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07783092-30f5-44f3-b2ca-8d3fc44309c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4
ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:16Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.541218 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d082317-b7b6-456f-8a6e-4824bb264ffd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b2f0c7e964fe5845bc4ba7e94c5e947524003ef995fa4275ad94fa9ed30c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7670a97c0e0af19970124e3466324b53d6a90ed83984c035733e5b80096a1a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fd85c0faedf90edc841af7367a6108d1f3edf4900009214a86da350b34529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c12c613dec1f633cee41c47f9286eaac20abe
4df2680d53a3421a2d30c4303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f447a89256dce3c9c818e37ad0ac00c5c5cc530898f333c5bdb91e50f904ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:16Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.552155 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:16Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.561766 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T17:45:16Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.573750 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:16Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.576142 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:16 crc 
kubenswrapper[4680]: I0217 17:45:16.576208 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.576221 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.576235 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.576264 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:16Z","lastTransitionTime":"2026-02-17T17:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.585489 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:16Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.598462 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c2f672e-82df-4d1a-9ce7-a76595c08a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5265823fddc28db144a7df7c6458c89d279be832a020758d0f081a36af2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d423bdc5590605117c90d7c3ebf60862e980a9ceb1dcc86f9f2a0fe53b956bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-knlwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:16Z is after 2025-08-24T17:21:41Z" Feb 17 
17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.608704 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-psx9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1270b10e-8243-4940-b8a1-e0d36a0ef933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66m8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66m8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-psx9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:16Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.678791 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.678820 4680 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.678827 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.678839 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.678847 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:16Z","lastTransitionTime":"2026-02-17T17:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.780915 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.780941 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.780950 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.780962 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.780970 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:16Z","lastTransitionTime":"2026-02-17T17:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.883018 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.883057 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.883066 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.883080 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.883090 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:16Z","lastTransitionTime":"2026-02-17T17:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.985026 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.985063 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.985073 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.985088 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:16 crc kubenswrapper[4680]: I0217 17:45:16.985096 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:16Z","lastTransitionTime":"2026-02-17T17:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.071524 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 16:33:03.183305275 +0000 UTC Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.088480 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.088527 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.088546 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.088563 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.088576 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:17Z","lastTransitionTime":"2026-02-17T17:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.104813 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.104858 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.104898 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:17 crc kubenswrapper[4680]: E0217 17:45:17.104926 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:45:17 crc kubenswrapper[4680]: E0217 17:45:17.105009 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:45:17 crc kubenswrapper[4680]: E0217 17:45:17.105103 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.105123 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:17 crc kubenswrapper[4680]: E0217 17:45:17.105248 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.190248 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.190282 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.190290 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.190303 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.190338 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:17Z","lastTransitionTime":"2026-02-17T17:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.292102 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.292133 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.292141 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.292152 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.292160 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:17Z","lastTransitionTime":"2026-02-17T17:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.394941 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.394965 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.394973 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.394984 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.394993 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:17Z","lastTransitionTime":"2026-02-17T17:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.400114 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vx4n_2733b1c1-6f98-4184-b59b-123435eeb569/ovnkube-controller/2.log" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.405416 4680 scope.go:117] "RemoveContainer" containerID="1bb0b65b5702e94ab1ddf321b85020cace62dd55923328e4a8acebea0830d7a1" Feb 17 17:45:17 crc kubenswrapper[4680]: E0217 17:45:17.405665 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8vx4n_openshift-ovn-kubernetes(2733b1c1-6f98-4184-b59b-123435eeb569)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.425965 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:17Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.441047 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:17Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.454874 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:17Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.467754 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\
\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:17Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.484414 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d46aa6668cf75e221b090abe55711c83759d52b5a278a3296b6d7310ff6b0861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877
c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint
\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:17Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.497656 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.497691 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.497699 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.497715 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.497725 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:17Z","lastTransitionTime":"2026-02-17T17:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.506680 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb0b65b5702e94ab1ddf321b85020cace62dd55923328e4a8acebea0830d7a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb0b65b5702e94ab1ddf321b85020cace62dd55923328e4a8acebea0830d7a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T17:45:16Z\\\",\\\"message\\\":\\\"03] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0217 17:45:16.004754 6239 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0217 17:45:16.004762 6239 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0217 17:45:16.004768 6239 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0217 17:45:16.004772 6239 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0217 17:45:16.004733 6239 services_controller.go:452] Built service openshift-machine-api/machine-api-controllers per-node LB for network=default: []services.LB{}\\\\nI0217 17:45:16.004783 6239 services_controller.go:453] Built service openshift-machine-api/machine-api-controllers template LB for network=default: []services.LB{}\\\\nF0217 17:45:16.004785 6239 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to sta\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:45:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8vx4n_openshift-ovn-kubernetes(2733b1c1-6f98-4184-b59b-123435eeb569)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:17Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.518296 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07783092-30f5-44f3-b2ca-8d3fc44309c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:17Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.539379 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d082317-b7b6-456f-8a6e-4824bb264ffd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b2f0c7e964fe5845bc4ba7e94c5e947524003ef995fa4275ad94fa9ed30c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7670a97c0e0af19970124e3466324b53d6a90ed83984c035733e5b80096a1a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fd85c0faedf90edc841af7367a6108d1f3edf4900009214a86da350b34529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c12c613dec1f633cee41c47f9286eaac20abe
4df2680d53a3421a2d30c4303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f447a89256dce3c9c818e37ad0ac00c5c5cc530898f333c5bdb91e50f904ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:17Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.551531 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:17Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.562189 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T17:45:17Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.572987 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:17Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.585757 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:17Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.600250 
4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.600280 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.600292 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.600305 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.600330 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:17Z","lastTransitionTime":"2026-02-17T17:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.615205 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:17Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.651845 4680 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c2f672e-82df-4d1a-9ce7-a76595c08a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5265823fddc28db144a7df7c6458c89d279be832a020758d0f081a36af2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d423bdc5590605117c90d7c3ebf60862e980a9ceb1dcc86f9f2a0fe53b956bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-knlwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T17:45:17Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.668640 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-psx9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1270b10e-8243-4940-b8a1-e0d36a0ef933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66m8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66m8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-psx9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:17Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.679723 4680 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:17Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.689159 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xpqrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f833b509-7a9e-4d56-be02-43df3d11b5ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86675ddd7e142b026885dcf74f84b8cd7c320910e31138c9802664f544c1b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxjq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xpqrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:17Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.702844 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.702993 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.703099 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.703225 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.703298 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:17Z","lastTransitionTime":"2026-02-17T17:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.805235 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.805524 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.805618 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.805716 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.805801 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:17Z","lastTransitionTime":"2026-02-17T17:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.908476 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.908695 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.908774 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.908861 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:17 crc kubenswrapper[4680]: I0217 17:45:17.908947 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:17Z","lastTransitionTime":"2026-02-17T17:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.011025 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.011055 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.011067 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.011082 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.011092 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:18Z","lastTransitionTime":"2026-02-17T17:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.072377 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 22:27:23.913844492 +0000 UTC Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.114427 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.114500 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.114523 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.114551 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.114572 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:18Z","lastTransitionTime":"2026-02-17T17:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.216607 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.216647 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.216660 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.216677 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.216688 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:18Z","lastTransitionTime":"2026-02-17T17:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.318667 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.318890 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.318953 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.319283 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.319396 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:18Z","lastTransitionTime":"2026-02-17T17:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.421635 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.421867 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.422027 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.422294 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.422573 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:18Z","lastTransitionTime":"2026-02-17T17:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.525394 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.525668 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.525729 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.525810 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.525867 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:18Z","lastTransitionTime":"2026-02-17T17:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.627807 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.628352 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.628437 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.628528 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.628601 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:18Z","lastTransitionTime":"2026-02-17T17:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.732227 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.732280 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.732297 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.732347 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.732364 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:18Z","lastTransitionTime":"2026-02-17T17:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.834189 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.834226 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.834238 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.834253 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.834264 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:18Z","lastTransitionTime":"2026-02-17T17:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.936232 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.936265 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.936273 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.936288 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:18 crc kubenswrapper[4680]: I0217 17:45:18.936298 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:18Z","lastTransitionTime":"2026-02-17T17:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.039200 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.039234 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.039246 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.039258 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.039267 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:19Z","lastTransitionTime":"2026-02-17T17:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.073031 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 08:31:46.621550028 +0000 UTC Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.104691 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.104736 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.104736 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:19 crc kubenswrapper[4680]: E0217 17:45:19.104808 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.104827 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:19 crc kubenswrapper[4680]: E0217 17:45:19.104917 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:45:19 crc kubenswrapper[4680]: E0217 17:45:19.105002 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:45:19 crc kubenswrapper[4680]: E0217 17:45:19.105061 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.142212 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.142249 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.142260 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.142275 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.142286 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:19Z","lastTransitionTime":"2026-02-17T17:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.244611 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.244673 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.244691 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.244717 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.244735 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:19Z","lastTransitionTime":"2026-02-17T17:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.347274 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.347556 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.347649 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.347740 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.347825 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:19Z","lastTransitionTime":"2026-02-17T17:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.449740 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.449772 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.449780 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.449792 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.449801 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:19Z","lastTransitionTime":"2026-02-17T17:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.552723 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.552760 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.552768 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.552782 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.552792 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:19Z","lastTransitionTime":"2026-02-17T17:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.654635 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.654669 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.654678 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.654692 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.654701 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:19Z","lastTransitionTime":"2026-02-17T17:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.756717 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.756754 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.756762 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.756775 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.756786 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:19Z","lastTransitionTime":"2026-02-17T17:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.859157 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.859397 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.859503 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.859574 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.859651 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:19Z","lastTransitionTime":"2026-02-17T17:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.961859 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.961926 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.961935 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.961949 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:19 crc kubenswrapper[4680]: I0217 17:45:19.961960 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:19Z","lastTransitionTime":"2026-02-17T17:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.064652 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.064685 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.064694 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.064709 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.064720 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:20Z","lastTransitionTime":"2026-02-17T17:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.073655 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 00:56:35.160890766 +0000 UTC Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.166608 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.166938 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.167049 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.167176 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.167284 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:20Z","lastTransitionTime":"2026-02-17T17:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.270268 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.270606 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.270804 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.270931 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.271060 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:20Z","lastTransitionTime":"2026-02-17T17:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.373611 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.373649 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.373661 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.373679 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.373691 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:20Z","lastTransitionTime":"2026-02-17T17:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.475917 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.475969 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.475977 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.475989 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.475999 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:20Z","lastTransitionTime":"2026-02-17T17:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.578367 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.578399 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.578425 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.578437 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.578448 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:20Z","lastTransitionTime":"2026-02-17T17:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.680820 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.680859 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.680871 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.680887 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.680900 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:20Z","lastTransitionTime":"2026-02-17T17:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.783741 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.783789 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.783797 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.783811 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.783820 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:20Z","lastTransitionTime":"2026-02-17T17:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.812258 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1270b10e-8243-4940-b8a1-e0d36a0ef933-metrics-certs\") pod \"network-metrics-daemon-psx9f\" (UID: \"1270b10e-8243-4940-b8a1-e0d36a0ef933\") " pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:20 crc kubenswrapper[4680]: E0217 17:45:20.812423 4680 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 17:45:20 crc kubenswrapper[4680]: E0217 17:45:20.812532 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1270b10e-8243-4940-b8a1-e0d36a0ef933-metrics-certs podName:1270b10e-8243-4940-b8a1-e0d36a0ef933 nodeName:}" failed. No retries permitted until 2026-02-17 17:45:36.812512178 +0000 UTC m=+66.363410857 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1270b10e-8243-4940-b8a1-e0d36a0ef933-metrics-certs") pod "network-metrics-daemon-psx9f" (UID: "1270b10e-8243-4940-b8a1-e0d36a0ef933") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.885830 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.885867 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.885876 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.885890 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.885900 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:20Z","lastTransitionTime":"2026-02-17T17:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.988077 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.988110 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.988118 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.988149 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:20 crc kubenswrapper[4680]: I0217 17:45:20.988159 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:20Z","lastTransitionTime":"2026-02-17T17:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.074715 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 00:58:54.160682178 +0000 UTC Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.090462 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.090527 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.090549 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.090583 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.090621 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:21Z","lastTransitionTime":"2026-02-17T17:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.104732 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:21 crc kubenswrapper[4680]: E0217 17:45:21.104905 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.105123 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.105144 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:21 crc kubenswrapper[4680]: E0217 17:45:21.105267 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.105252 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:21 crc kubenswrapper[4680]: E0217 17:45:21.105470 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:45:21 crc kubenswrapper[4680]: E0217 17:45:21.105637 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.116844 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.117081 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.117147 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.117216 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.117283 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:21Z","lastTransitionTime":"2026-02-17T17:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.126822 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:21Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:21 crc kubenswrapper[4680]: E0217 17:45:21.131741 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3
f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:21Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.136093 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.136136 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.136147 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.136164 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.136176 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:21Z","lastTransitionTime":"2026-02-17T17:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.137545 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:21Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.149951 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:21Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:21 crc kubenswrapper[4680]: E0217 17:45:21.152668 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:21Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.157103 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.157161 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.157175 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.157194 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.157209 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:21Z","lastTransitionTime":"2026-02-17T17:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.172863 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d46aa6668cf75e221b090abe55711c83759d52b5a278a3296b6d7310ff6b0861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97
bc4ecba8bc366a201e11ba1619702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:21Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:21 crc kubenswrapper[4680]: E0217 17:45:21.177044 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:21Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.181076 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.181098 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.181106 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.181120 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.181131 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:21Z","lastTransitionTime":"2026-02-17T17:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:21 crc kubenswrapper[4680]: E0217 17:45:21.194851 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:21Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.195617 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb0b65b5702e94ab1ddf321b85020cace62dd55923328e4a8acebea0830d7a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb0b65b5702e94ab1ddf321b85020cace62dd55923328e4a8acebea0830d7a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T17:45:16Z\\\",\\\"message\\\":\\\"03] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0217 17:45:16.004754 6239 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0217 17:45:16.004762 6239 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0217 17:45:16.004768 6239 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0217 17:45:16.004772 6239 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0217 17:45:16.004733 6239 services_controller.go:452] Built service openshift-machine-api/machine-api-controllers per-node LB for network=default: []services.LB{}\\\\nI0217 17:45:16.004783 6239 services_controller.go:453] Built service openshift-machine-api/machine-api-controllers template LB for network=default: []services.LB{}\\\\nF0217 17:45:16.004785 6239 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to sta\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:45:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8vx4n_openshift-ovn-kubernetes(2733b1c1-6f98-4184-b59b-123435eeb569)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:21Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.198677 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.198701 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.198709 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.198724 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.198732 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:21Z","lastTransitionTime":"2026-02-17T17:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:21 crc kubenswrapper[4680]: E0217 17:45:21.210937 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:21Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:21 crc kubenswrapper[4680]: E0217 17:45:21.211048 4680 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.213407 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.213456 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.213476 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.213500 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.213519 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:21Z","lastTransitionTime":"2026-02-17T17:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.213547 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07783092-30f5-44f3-b2ca-8d3fc44309c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:21Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.237438 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d082317-b7b6-456f-8a6e-4824bb264ffd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b2f0c7e964fe5845bc4ba7e94c5e947524003ef995fa4275ad94fa9ed30c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7670a97c0e0af19970124e3466324b53d6a90ed83984c035733e5b80096a1a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fd85c0faedf90edc841af7367a6108d1f3edf4900009214a86da350b34529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c12c613dec1f633cee41c47f9286eaac20abe
4df2680d53a3421a2d30c4303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f447a89256dce3c9c818e37ad0ac00c5c5cc530898f333c5bdb91e50f904ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:21Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.248903 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c2f672e-82df-4d1a-9ce7-a76595c08a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5265823fddc28db144a7df7c6458c89d279be832a020758d0f081a36af2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d423bdc5590605117c90d7c3ebf60862e980a9ceb1dcc86f9f2a0fe53b956bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-knlwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:21Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.261751 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-psx9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1270b10e-8243-4940-b8a1-e0d36a0ef933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66m8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66m8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-psx9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:21Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.275962 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:21Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.287562 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:21Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.305332 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:21Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.316534 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.316574 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.316583 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.316595 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.316604 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:21Z","lastTransitionTime":"2026-02-17T17:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.318837 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xpqrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f833b509-7a9e-4d56-be02-43df3d11b5ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86675ddd7e142b026885dcf74f84b8cd7c320910e31138c9802664f544c1b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxjq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xpqrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:21Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.330351 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:21Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.341761 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:21Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.356421 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:21Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.368945 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:21Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.422567 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.422855 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.423047 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.423976 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.424107 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:21Z","lastTransitionTime":"2026-02-17T17:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.526642 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.526730 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.526745 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.526767 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.526791 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:21Z","lastTransitionTime":"2026-02-17T17:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.629680 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.630003 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.630147 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.630283 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.630456 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:21Z","lastTransitionTime":"2026-02-17T17:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.733361 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.733414 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.733423 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.733439 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.733450 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:21Z","lastTransitionTime":"2026-02-17T17:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.835820 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.835860 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.835872 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.835888 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.835900 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:21Z","lastTransitionTime":"2026-02-17T17:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.937969 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.938002 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.938014 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.938028 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:21 crc kubenswrapper[4680]: I0217 17:45:21.938039 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:21Z","lastTransitionTime":"2026-02-17T17:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.027794 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:45:22 crc kubenswrapper[4680]: E0217 17:45:22.027955 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:45:54.027937447 +0000 UTC m=+83.578836126 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.040003 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.040048 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.040063 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.040078 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.040089 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:22Z","lastTransitionTime":"2026-02-17T17:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.075400 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 12:47:48.708869261 +0000 UTC Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.128803 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:22 crc kubenswrapper[4680]: E0217 17:45:22.129053 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 17:45:22 crc kubenswrapper[4680]: E0217 17:45:22.129487 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 17:45:22 crc kubenswrapper[4680]: E0217 17:45:22.129500 4680 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 17:45:22 crc kubenswrapper[4680]: E0217 17:45:22.129559 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-17 17:45:54.129540442 +0000 UTC m=+83.680439121 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 17:45:22 crc kubenswrapper[4680]: E0217 17:45:22.129602 4680 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.129450 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:22 crc kubenswrapper[4680]: E0217 17:45:22.129687 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 17:45:54.129664495 +0000 UTC m=+83.680563184 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.129800 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:22 crc kubenswrapper[4680]: E0217 17:45:22.129866 4680 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.129888 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:22 crc kubenswrapper[4680]: E0217 17:45:22.129909 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 17:45:54.129896442 +0000 UTC m=+83.680795141 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 17:45:22 crc kubenswrapper[4680]: E0217 17:45:22.129987 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 17:45:22 crc kubenswrapper[4680]: E0217 17:45:22.130003 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 17:45:22 crc kubenswrapper[4680]: E0217 17:45:22.130012 4680 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 17:45:22 crc kubenswrapper[4680]: E0217 17:45:22.130045 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 17:45:54.130036737 +0000 UTC m=+83.680935416 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.142182 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.142233 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.142242 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.142255 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.142264 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:22Z","lastTransitionTime":"2026-02-17T17:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.244425 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.244781 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.244850 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.244937 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.245002 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:22Z","lastTransitionTime":"2026-02-17T17:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.346659 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.346687 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.346696 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.346709 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.346718 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:22Z","lastTransitionTime":"2026-02-17T17:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.448994 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.449024 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.449032 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.449044 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.449054 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:22Z","lastTransitionTime":"2026-02-17T17:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.551863 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.551910 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.551923 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.551940 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.551954 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:22Z","lastTransitionTime":"2026-02-17T17:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.653850 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.653886 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.653898 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.653915 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.653924 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:22Z","lastTransitionTime":"2026-02-17T17:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.756005 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.756045 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.756055 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.756095 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.756104 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:22Z","lastTransitionTime":"2026-02-17T17:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.859306 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.859366 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.859376 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.859389 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.859400 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:22Z","lastTransitionTime":"2026-02-17T17:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.960913 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.960935 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.960943 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.960954 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:22 crc kubenswrapper[4680]: I0217 17:45:22.960963 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:22Z","lastTransitionTime":"2026-02-17T17:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.063145 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.063178 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.063187 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.063200 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.063209 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:23Z","lastTransitionTime":"2026-02-17T17:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.076524 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 09:53:25.553913001 +0000 UTC Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.105224 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.105246 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:23 crc kubenswrapper[4680]: E0217 17:45:23.105349 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.105401 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:23 crc kubenswrapper[4680]: E0217 17:45:23.105471 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:45:23 crc kubenswrapper[4680]: E0217 17:45:23.105546 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.105709 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:23 crc kubenswrapper[4680]: E0217 17:45:23.105773 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.165187 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.165408 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.165507 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.165577 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.165634 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:23Z","lastTransitionTime":"2026-02-17T17:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.268371 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.268421 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.268435 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.268453 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.268467 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:23Z","lastTransitionTime":"2026-02-17T17:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.370248 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.370280 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.370291 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.370306 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.370332 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:23Z","lastTransitionTime":"2026-02-17T17:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.471908 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.471945 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.471954 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.471969 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.471978 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:23Z","lastTransitionTime":"2026-02-17T17:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.555584 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.563812 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.569413 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:23Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.573683 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.573706 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.573714 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.573726 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.573735 4680 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:23Z","lastTransitionTime":"2026-02-17T17:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.581938 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:23Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.598736 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:23Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.610587 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:23Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.622286 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:23Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.634033 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d46aa6668cf75e221b090abe55711c83759d52b5a278a3296b6d7310ff6b0861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:23Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.650294 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f
102df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb0b65b5702e94ab1ddf321b85020cace62dd55923328e4a8acebea0830d7a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb0b65b5702e94ab1ddf321b85020cace62dd55923328e4a8acebea0830d7a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T17:45:16Z\\\",\\\"message\\\":\\\"03] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0217 17:45:16.004754 6239 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0217 17:45:16.004762 6239 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0217 17:45:16.004768 6239 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0217 17:45:16.004772 6239 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0217 17:45:16.004733 6239 services_controller.go:452] Built service openshift-machine-api/machine-api-controllers per-node LB for network=default: []services.LB{}\\\\nI0217 17:45:16.004783 6239 services_controller.go:453] Built service openshift-machine-api/machine-api-controllers template LB for network=default: []services.LB{}\\\\nF0217 17:45:16.004785 6239 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to 
sta\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:45:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8vx4n_openshift-ovn-kubernetes(2733b1c1-6f98-4184-b59b-123435eeb569)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:23Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.660745 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07783092-30f5-44f3-b2ca-8d3fc44309c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:23Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.675451 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.675485 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.675499 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.675514 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.675525 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:23Z","lastTransitionTime":"2026-02-17T17:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.677678 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d082317-b7b6-456f-8a6e-4824bb264ffd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b2f0c7e964fe5845bc4ba7e94c5e947524003ef995fa4275ad94fa9ed30c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7670a97c0e0af19970124e3466324b53d6a90ed83984c035733e5b80096a1a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fd85c0faedf90edc841af7367a6108d1f3edf4900009214a86da350b34529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c12c613dec1f633cee41c47f9286eaac20abe4df2680d53a3421a2d30c4303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f447a89256dce3c9c818e37ad0ac00c5c5cc530898f333c5bdb91e50f904ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:23Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.687429 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:23Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.696722 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T17:45:23Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.707513 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:23Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.718938 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:23Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.728042 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c2f672e-82df-4d1a-9ce7-a76595c08a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5265823fddc28db144a7df7c6458c89d279be832a020758d0f081a36af2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d423bdc5590605117c90d7c3ebf60862e980a9ceb1dcc86f9f2a0fe53b956bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-knlwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:23Z is after 2025-08-24T17:21:41Z" Feb 17 
17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.737510 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-psx9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1270b10e-8243-4940-b8a1-e0d36a0ef933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66m8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66m8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-psx9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:23Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.747424 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:23Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.756477 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xpqrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f833b509-7a9e-4d56-be02-43df3d11b5ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86675ddd7e142b026885dcf74f84b8cd7c320910e31138c9802664f544c1b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxjq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xpqrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:23Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.778199 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.778238 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.778251 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.778266 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.778277 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:23Z","lastTransitionTime":"2026-02-17T17:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.880215 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.880374 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.880385 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.880397 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.880405 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:23Z","lastTransitionTime":"2026-02-17T17:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.982397 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.982430 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.982438 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.982450 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:23 crc kubenswrapper[4680]: I0217 17:45:23.982458 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:23Z","lastTransitionTime":"2026-02-17T17:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.077173 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 06:05:58.848170323 +0000 UTC Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.084920 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.084953 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.084960 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.084974 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.084983 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:24Z","lastTransitionTime":"2026-02-17T17:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.186989 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.187021 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.187030 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.187050 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.187060 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:24Z","lastTransitionTime":"2026-02-17T17:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.289024 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.289059 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.289068 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.289082 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.289091 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:24Z","lastTransitionTime":"2026-02-17T17:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.391632 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.391658 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.391668 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.391684 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.391696 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:24Z","lastTransitionTime":"2026-02-17T17:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.493661 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.493713 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.493732 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.493755 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.493772 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:24Z","lastTransitionTime":"2026-02-17T17:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.596530 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.596591 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.596612 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.596640 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.596660 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:24Z","lastTransitionTime":"2026-02-17T17:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.699418 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.699490 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.699516 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.699546 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.699572 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:24Z","lastTransitionTime":"2026-02-17T17:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.802444 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.802483 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.802495 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.802510 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.802521 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:24Z","lastTransitionTime":"2026-02-17T17:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.905043 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.905087 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.905102 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.905123 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:24 crc kubenswrapper[4680]: I0217 17:45:24.905135 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:24Z","lastTransitionTime":"2026-02-17T17:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.007576 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.007617 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.007627 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.007641 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.007651 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:25Z","lastTransitionTime":"2026-02-17T17:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.077888 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 14:03:05.63731744 +0000 UTC Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.105238 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.105312 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.105379 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:25 crc kubenswrapper[4680]: E0217 17:45:25.105507 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:45:25 crc kubenswrapper[4680]: E0217 17:45:25.105602 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:45:25 crc kubenswrapper[4680]: E0217 17:45:25.105702 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.105517 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:25 crc kubenswrapper[4680]: E0217 17:45:25.105995 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.109101 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.109133 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.109142 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.109165 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.109173 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:25Z","lastTransitionTime":"2026-02-17T17:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.210983 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.211034 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.211043 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.211058 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.211068 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:25Z","lastTransitionTime":"2026-02-17T17:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.313273 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.313341 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.313366 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.313386 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.313398 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:25Z","lastTransitionTime":"2026-02-17T17:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.415576 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.415613 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.415624 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.415639 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.415650 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:25Z","lastTransitionTime":"2026-02-17T17:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.517571 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.517612 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.517622 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.517637 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.517647 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:25Z","lastTransitionTime":"2026-02-17T17:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.620034 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.620071 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.620080 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.620094 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.620106 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:25Z","lastTransitionTime":"2026-02-17T17:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.722714 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.722761 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.722775 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.722793 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.722805 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:25Z","lastTransitionTime":"2026-02-17T17:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.825369 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.825409 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.825419 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.825432 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.825441 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:25Z","lastTransitionTime":"2026-02-17T17:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.927830 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.927856 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.927867 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.927898 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:25 crc kubenswrapper[4680]: I0217 17:45:25.927908 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:25Z","lastTransitionTime":"2026-02-17T17:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.029614 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.029660 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.029671 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.029686 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.029697 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:26Z","lastTransitionTime":"2026-02-17T17:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.078298 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 06:20:38.057111338 +0000 UTC Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.132631 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.132703 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.132721 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.132747 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.132768 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:26Z","lastTransitionTime":"2026-02-17T17:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.235834 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.235904 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.235925 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.235950 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.235967 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:26Z","lastTransitionTime":"2026-02-17T17:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.338796 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.339192 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.339397 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.339590 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.339742 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:26Z","lastTransitionTime":"2026-02-17T17:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.442786 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.442838 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.442854 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.442876 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.442893 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:26Z","lastTransitionTime":"2026-02-17T17:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.545204 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.545539 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.545698 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.545886 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.546131 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:26Z","lastTransitionTime":"2026-02-17T17:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.649360 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.649802 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.649947 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.650138 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.650284 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:26Z","lastTransitionTime":"2026-02-17T17:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.753392 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.753456 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.753478 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.753504 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.753524 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:26Z","lastTransitionTime":"2026-02-17T17:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.856432 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.856489 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.856511 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.856537 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.856556 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:26Z","lastTransitionTime":"2026-02-17T17:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.959414 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.959483 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.959506 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.959532 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:26 crc kubenswrapper[4680]: I0217 17:45:26.959552 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:26Z","lastTransitionTime":"2026-02-17T17:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.061572 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.061622 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.061640 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.061669 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.061686 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:27Z","lastTransitionTime":"2026-02-17T17:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.078740 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 05:41:32.395599058 +0000 UTC Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.104283 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.104447 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:27 crc kubenswrapper[4680]: E0217 17:45:27.104467 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:45:27 crc kubenswrapper[4680]: E0217 17:45:27.104615 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.104871 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:27 crc kubenswrapper[4680]: E0217 17:45:27.105084 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.105114 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:27 crc kubenswrapper[4680]: E0217 17:45:27.105342 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.164386 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.164422 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.164430 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.164442 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.164450 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:27Z","lastTransitionTime":"2026-02-17T17:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.266379 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.266655 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.266715 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.266810 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.266888 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:27Z","lastTransitionTime":"2026-02-17T17:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.369002 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.369242 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.369344 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.369464 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.369541 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:27Z","lastTransitionTime":"2026-02-17T17:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.471861 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.471916 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.471933 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.471954 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.471971 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:27Z","lastTransitionTime":"2026-02-17T17:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.575030 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.575058 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.575067 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.575079 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.575089 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:27Z","lastTransitionTime":"2026-02-17T17:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.678059 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.678109 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.678122 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.678137 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.678147 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:27Z","lastTransitionTime":"2026-02-17T17:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.782413 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.782460 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.782472 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.782489 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.782505 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:27Z","lastTransitionTime":"2026-02-17T17:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.885104 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.885167 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.885176 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.885189 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.885197 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:27Z","lastTransitionTime":"2026-02-17T17:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.986999 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.987033 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.987043 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.987057 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:27 crc kubenswrapper[4680]: I0217 17:45:27.987067 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:27Z","lastTransitionTime":"2026-02-17T17:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.079413 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 20:02:27.556542675 +0000 UTC Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.089886 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.089953 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.089976 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.090005 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.090027 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:28Z","lastTransitionTime":"2026-02-17T17:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.192667 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.192711 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.192727 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.192748 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.192764 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:28Z","lastTransitionTime":"2026-02-17T17:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.294727 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.294792 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.294804 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.294820 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.294834 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:28Z","lastTransitionTime":"2026-02-17T17:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.396687 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.396728 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.396739 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.396755 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.396765 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:28Z","lastTransitionTime":"2026-02-17T17:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.499988 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.500030 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.500039 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.500054 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.500063 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:28Z","lastTransitionTime":"2026-02-17T17:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.602890 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.603198 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.603402 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.603568 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.603693 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:28Z","lastTransitionTime":"2026-02-17T17:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.706386 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.706419 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.706431 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.706445 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.706456 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:28Z","lastTransitionTime":"2026-02-17T17:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.809528 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.809988 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.810228 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.810433 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.810873 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:28Z","lastTransitionTime":"2026-02-17T17:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.914034 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.914075 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.914089 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.914105 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:28 crc kubenswrapper[4680]: I0217 17:45:28.914115 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:28Z","lastTransitionTime":"2026-02-17T17:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.016263 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.016358 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.016379 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.016404 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.016422 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:29Z","lastTransitionTime":"2026-02-17T17:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.080385 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 21:25:31.989799812 +0000 UTC Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.105229 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:29 crc kubenswrapper[4680]: E0217 17:45:29.105521 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.105250 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:29 crc kubenswrapper[4680]: E0217 17:45:29.105732 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.105369 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:29 crc kubenswrapper[4680]: E0217 17:45:29.105948 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.105229 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:29 crc kubenswrapper[4680]: E0217 17:45:29.106109 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.118259 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.118360 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.118381 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.118404 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.118423 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:29Z","lastTransitionTime":"2026-02-17T17:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.220996 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.221305 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.221423 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.221498 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.221578 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:29Z","lastTransitionTime":"2026-02-17T17:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.325028 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.325069 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.325082 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.325098 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.325110 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:29Z","lastTransitionTime":"2026-02-17T17:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.429256 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.429734 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.429862 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.429997 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.430162 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:29Z","lastTransitionTime":"2026-02-17T17:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.532845 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.533081 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.533196 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.533290 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.533399 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:29Z","lastTransitionTime":"2026-02-17T17:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.636432 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.636465 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.636497 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.636514 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.636523 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:29Z","lastTransitionTime":"2026-02-17T17:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.738936 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.739226 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.739351 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.739457 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.739558 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:29Z","lastTransitionTime":"2026-02-17T17:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.841900 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.841939 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.841950 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.841968 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.841980 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:29Z","lastTransitionTime":"2026-02-17T17:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.944975 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.945029 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.945042 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.945058 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:29 crc kubenswrapper[4680]: I0217 17:45:29.945074 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:29Z","lastTransitionTime":"2026-02-17T17:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.049083 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.049141 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.049155 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.049174 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.049188 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:30Z","lastTransitionTime":"2026-02-17T17:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.080565 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 00:20:10.00940871 +0000 UTC Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.106625 4680 scope.go:117] "RemoveContainer" containerID="1bb0b65b5702e94ab1ddf321b85020cace62dd55923328e4a8acebea0830d7a1" Feb 17 17:45:30 crc kubenswrapper[4680]: E0217 17:45:30.106838 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8vx4n_openshift-ovn-kubernetes(2733b1c1-6f98-4184-b59b-123435eeb569)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.151440 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.151695 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.151782 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.151913 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.152003 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:30Z","lastTransitionTime":"2026-02-17T17:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.254904 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.255582 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.255681 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.255796 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.255946 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:30Z","lastTransitionTime":"2026-02-17T17:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.358265 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.358562 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.358630 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.358703 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.358822 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:30Z","lastTransitionTime":"2026-02-17T17:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.462247 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.462338 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.462351 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.462371 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.462382 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:30Z","lastTransitionTime":"2026-02-17T17:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.565892 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.565952 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.565976 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.566007 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.566025 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:30Z","lastTransitionTime":"2026-02-17T17:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.669810 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.670139 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.670420 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.670528 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.670608 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:30Z","lastTransitionTime":"2026-02-17T17:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.775220 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.775725 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.775914 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.776125 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.776269 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:30Z","lastTransitionTime":"2026-02-17T17:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.879700 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.879763 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.879775 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.879790 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.879800 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:30Z","lastTransitionTime":"2026-02-17T17:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.981877 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.981920 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.981931 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.981958 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:30 crc kubenswrapper[4680]: I0217 17:45:30.981969 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:30Z","lastTransitionTime":"2026-02-17T17:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.081527 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 02:55:56.825168912 +0000 UTC Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.084445 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.084474 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.084483 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.084498 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.084508 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:31Z","lastTransitionTime":"2026-02-17T17:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.105509 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:31 crc kubenswrapper[4680]: E0217 17:45:31.105635 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.105645 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.105740 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:31 crc kubenswrapper[4680]: E0217 17:45:31.105859 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.105965 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:31 crc kubenswrapper[4680]: E0217 17:45:31.105951 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:45:31 crc kubenswrapper[4680]: E0217 17:45:31.106029 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.120938 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.133865 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xpqrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f833b509-7a9e-4d56-be02-43df3d11b5ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86675ddd7e142b026885dcf74f84b8cd7c320910e31138c9802664f544c1b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxjq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xpqrx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.147165 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.162842 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.182736 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.187120 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.187205 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.187220 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.187255 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.187267 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:31Z","lastTransitionTime":"2026-02-17T17:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.196990 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.208703 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.223779 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.247428 4680 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d46aa6668cf75e221b090abe55711c83759d52b5a278a3296b6d7310ff6b0861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.269075 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb0b65b5702e94ab1ddf321b85020cace62dd55923328e4a8acebea0830d7a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb0b65b5702e94ab1ddf321b85020cace62dd55923328e4a8acebea0830d7a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T17:45:16Z\\\",\\\"message\\\":\\\"03] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0217 17:45:16.004754 6239 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0217 17:45:16.004762 6239 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0217 17:45:16.004768 6239 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0217 17:45:16.004772 6239 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0217 17:45:16.004733 6239 services_controller.go:452] Built service openshift-machine-api/machine-api-controllers per-node LB for network=default: []services.LB{}\\\\nI0217 17:45:16.004783 6239 services_controller.go:453] Built service openshift-machine-api/machine-api-controllers template LB for network=default: []services.LB{}\\\\nF0217 17:45:16.004785 6239 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to sta\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:45:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8vx4n_openshift-ovn-kubernetes(2733b1c1-6f98-4184-b59b-123435eeb569)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.286875 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07783092-30f5-44f3-b2ca-8d3fc44309c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.289869 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.289919 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.289929 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.289944 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.289953 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:31Z","lastTransitionTime":"2026-02-17T17:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.302350 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82bbc557-cf3e-453a-8485-d987ca457549\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b524e303b5baf95c0a7b7716975c1ab1755579270f3097d585466503c56a9c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de5ddc06e863eadae5ce325db8f0a2ba74088e8f27704d5a5029a4ab1f666d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b337f7427aa447a1d32260cf850cfee8550ef1926c808b4462f07f3142c49a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34916ef8a56d46d0bd474e801cdfe2075ecd1d3c9d2d0ff7100d2631d5208b84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34916ef8a56d46d0bd474e801cdfe2075ecd1d3c9d2d0ff7100d2631d5208b84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.323203 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d082317-b7b6-456f-8a6e-4824bb264ffd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b2f0c7e964fe5845bc4ba7e94c5e947524003ef995fa4275ad94fa9ed30c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7670a97c0e0af19970124e3466324b53d6a90ed83984c035733e5b80096a1a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fd85c0faedf90edc841af7367a6108d1f3edf4900009214a86da350b34529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c12c613dec1f633cee41c47f9286eaac20abe
4df2680d53a3421a2d30c4303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f447a89256dce3c9c818e37ad0ac00c5c5cc530898f333c5bdb91e50f904ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.335647 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.347060 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-psx9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1270b10e-8243-4940-b8a1-e0d36a0ef933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66m8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66m8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-psx9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.364186 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.376167 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.387487 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c2f672e-82df-4d1a-9ce7-a76595c08a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5265823fddc28db144a7df7c6458c89d279be832a020758d0f081a36af2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d423bdc5590605117c90d7c3ebf60862e980a9ceb1dcc86f9f2a0fe53b956bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-knlwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 
17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.392186 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.392228 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.392239 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.392254 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.392265 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:31Z","lastTransitionTime":"2026-02-17T17:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.393350 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.393383 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.393394 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.393428 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.393438 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:31Z","lastTransitionTime":"2026-02-17T17:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:31 crc kubenswrapper[4680]: E0217 17:45:31.404176 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.408359 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.408393 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.408402 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.408415 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.408426 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:31Z","lastTransitionTime":"2026-02-17T17:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:31 crc kubenswrapper[4680]: E0217 17:45:31.419820 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.423156 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.423215 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.423231 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.423247 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.423664 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:31Z","lastTransitionTime":"2026-02-17T17:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:31 crc kubenswrapper[4680]: E0217 17:45:31.435426 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.438920 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.438949 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.438958 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.438971 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.438980 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:31Z","lastTransitionTime":"2026-02-17T17:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:31 crc kubenswrapper[4680]: E0217 17:45:31.450081 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.453187 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.453305 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.453407 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.453480 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.453557 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:31Z","lastTransitionTime":"2026-02-17T17:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:31 crc kubenswrapper[4680]: E0217 17:45:31.465225 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:31 crc kubenswrapper[4680]: E0217 17:45:31.465380 4680 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.494746 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.494789 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.494802 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.494818 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.494830 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:31Z","lastTransitionTime":"2026-02-17T17:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.597056 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.597096 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.597105 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.597117 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.597126 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:31Z","lastTransitionTime":"2026-02-17T17:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.699849 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.699885 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.699893 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.699906 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.699916 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:31Z","lastTransitionTime":"2026-02-17T17:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.802382 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.802421 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.802430 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.802443 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.802454 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:31Z","lastTransitionTime":"2026-02-17T17:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.904474 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.904516 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.904527 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.904543 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:31 crc kubenswrapper[4680]: I0217 17:45:31.904552 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:31Z","lastTransitionTime":"2026-02-17T17:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.007306 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.007362 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.007373 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.007386 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.007396 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:32Z","lastTransitionTime":"2026-02-17T17:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.082725 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 04:47:32.885227615 +0000 UTC Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.110297 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.110372 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.110386 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.110403 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.110415 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:32Z","lastTransitionTime":"2026-02-17T17:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.213446 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.213490 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.213501 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.213517 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.213526 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:32Z","lastTransitionTime":"2026-02-17T17:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.316108 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.316152 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.316165 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.316181 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.316192 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:32Z","lastTransitionTime":"2026-02-17T17:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.418159 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.418194 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.418205 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.418221 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.418232 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:32Z","lastTransitionTime":"2026-02-17T17:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.520453 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.520489 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.520498 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.520511 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.520519 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:32Z","lastTransitionTime":"2026-02-17T17:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.622642 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.622680 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.622691 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.622707 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.622740 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:32Z","lastTransitionTime":"2026-02-17T17:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.725411 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.725447 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.725461 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.725477 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.725487 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:32Z","lastTransitionTime":"2026-02-17T17:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.828361 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.828410 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.828423 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.828440 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.828452 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:32Z","lastTransitionTime":"2026-02-17T17:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.930553 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.930596 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.930609 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.930628 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:32 crc kubenswrapper[4680]: I0217 17:45:32.930641 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:32Z","lastTransitionTime":"2026-02-17T17:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.033727 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.033796 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.033815 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.033843 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.033863 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:33Z","lastTransitionTime":"2026-02-17T17:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.083559 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 00:22:20.404270317 +0000 UTC Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.105058 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.105116 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.105158 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:33 crc kubenswrapper[4680]: E0217 17:45:33.105289 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.105311 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:33 crc kubenswrapper[4680]: E0217 17:45:33.105458 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:45:33 crc kubenswrapper[4680]: E0217 17:45:33.105621 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:45:33 crc kubenswrapper[4680]: E0217 17:45:33.105757 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.136299 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.136366 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.136385 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.136405 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.136415 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:33Z","lastTransitionTime":"2026-02-17T17:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.239297 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.239367 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.239381 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.239399 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.239440 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:33Z","lastTransitionTime":"2026-02-17T17:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.341817 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.341857 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.341869 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.341884 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.341896 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:33Z","lastTransitionTime":"2026-02-17T17:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.444206 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.444249 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.444258 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.444271 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.444279 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:33Z","lastTransitionTime":"2026-02-17T17:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.546395 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.546436 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.546449 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.546463 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.546472 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:33Z","lastTransitionTime":"2026-02-17T17:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.648269 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.648346 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.648363 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.648383 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.648395 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:33Z","lastTransitionTime":"2026-02-17T17:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.750909 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.750943 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.750954 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.750989 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.751004 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:33Z","lastTransitionTime":"2026-02-17T17:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.853169 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.853211 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.853223 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.853239 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.853249 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:33Z","lastTransitionTime":"2026-02-17T17:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.955614 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.955651 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.955662 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.955678 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:33 crc kubenswrapper[4680]: I0217 17:45:33.955689 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:33Z","lastTransitionTime":"2026-02-17T17:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.058368 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.058650 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.058662 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.058676 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.058686 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:34Z","lastTransitionTime":"2026-02-17T17:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.084606 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 23:42:58.155218648 +0000 UTC Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.160875 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.160910 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.160919 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.160934 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.160944 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:34Z","lastTransitionTime":"2026-02-17T17:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.263124 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.263154 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.263162 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.263175 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.263183 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:34Z","lastTransitionTime":"2026-02-17T17:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.365222 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.365282 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.365298 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.365362 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.365381 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:34Z","lastTransitionTime":"2026-02-17T17:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.467519 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.467557 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.467568 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.467584 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.467595 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:34Z","lastTransitionTime":"2026-02-17T17:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.572048 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.572248 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.572265 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.572278 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.572310 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:34Z","lastTransitionTime":"2026-02-17T17:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.674436 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.674471 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.674480 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.674492 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.674501 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:34Z","lastTransitionTime":"2026-02-17T17:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.776651 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.776691 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.776707 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.776727 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.776741 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:34Z","lastTransitionTime":"2026-02-17T17:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.878660 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.878698 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.878709 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.878724 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.878735 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:34Z","lastTransitionTime":"2026-02-17T17:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.980922 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.980957 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.980968 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.980983 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:34 crc kubenswrapper[4680]: I0217 17:45:34.980994 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:34Z","lastTransitionTime":"2026-02-17T17:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.083516 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.083547 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.083557 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.083571 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.083581 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:35Z","lastTransitionTime":"2026-02-17T17:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.085162 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 19:57:57.696344964 +0000 UTC Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.105499 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.105537 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:35 crc kubenswrapper[4680]: E0217 17:45:35.105615 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.105685 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:35 crc kubenswrapper[4680]: E0217 17:45:35.105750 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.105499 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:35 crc kubenswrapper[4680]: E0217 17:45:35.105815 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:45:35 crc kubenswrapper[4680]: E0217 17:45:35.105876 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.185833 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.185879 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.185891 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.185909 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.185921 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:35Z","lastTransitionTime":"2026-02-17T17:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.287958 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.288010 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.288020 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.288034 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.288044 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:35Z","lastTransitionTime":"2026-02-17T17:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.390580 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.390616 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.390626 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.390640 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.390649 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:35Z","lastTransitionTime":"2026-02-17T17:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.492898 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.492927 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.492936 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.492948 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.492957 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:35Z","lastTransitionTime":"2026-02-17T17:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.595484 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.595530 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.595542 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.595558 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.595570 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:35Z","lastTransitionTime":"2026-02-17T17:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.697690 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.697737 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.697750 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.697766 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.697779 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:35Z","lastTransitionTime":"2026-02-17T17:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.800807 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.800863 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.800883 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.800902 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.800915 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:35Z","lastTransitionTime":"2026-02-17T17:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.903593 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.903659 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.903669 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.903700 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:35 crc kubenswrapper[4680]: I0217 17:45:35.903710 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:35Z","lastTransitionTime":"2026-02-17T17:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.006135 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.006177 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.006186 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.006200 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.006209 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:36Z","lastTransitionTime":"2026-02-17T17:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.085738 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 14:00:04.587571753 +0000 UTC Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.109695 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.109743 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.109752 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.109765 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.109775 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:36Z","lastTransitionTime":"2026-02-17T17:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.117227 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.212648 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.212703 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.212719 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.212742 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.212759 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:36Z","lastTransitionTime":"2026-02-17T17:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.315556 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.315636 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.315650 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.315666 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.315678 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:36Z","lastTransitionTime":"2026-02-17T17:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.417655 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.417686 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.417698 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.417713 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.417724 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:36Z","lastTransitionTime":"2026-02-17T17:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.519942 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.519974 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.519986 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.520003 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.520017 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:36Z","lastTransitionTime":"2026-02-17T17:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.622255 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.622293 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.622305 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.622343 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.622364 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:36Z","lastTransitionTime":"2026-02-17T17:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.724294 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.724346 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.724354 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.724366 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.724375 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:36Z","lastTransitionTime":"2026-02-17T17:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.826205 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.826255 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.826267 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.826286 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.826299 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:36Z","lastTransitionTime":"2026-02-17T17:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.876762 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1270b10e-8243-4940-b8a1-e0d36a0ef933-metrics-certs\") pod \"network-metrics-daemon-psx9f\" (UID: \"1270b10e-8243-4940-b8a1-e0d36a0ef933\") " pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:36 crc kubenswrapper[4680]: E0217 17:45:36.876933 4680 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 17:45:36 crc kubenswrapper[4680]: E0217 17:45:36.877043 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1270b10e-8243-4940-b8a1-e0d36a0ef933-metrics-certs podName:1270b10e-8243-4940-b8a1-e0d36a0ef933 nodeName:}" failed. No retries permitted until 2026-02-17 17:46:08.877021872 +0000 UTC m=+98.427920611 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1270b10e-8243-4940-b8a1-e0d36a0ef933-metrics-certs") pod "network-metrics-daemon-psx9f" (UID: "1270b10e-8243-4940-b8a1-e0d36a0ef933") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.928583 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.928609 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.928618 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.928629 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:36 crc kubenswrapper[4680]: I0217 17:45:36.928639 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:36Z","lastTransitionTime":"2026-02-17T17:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.030525 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.030549 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.030557 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.030569 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.030576 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:37Z","lastTransitionTime":"2026-02-17T17:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.085874 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 19:52:52.413168688 +0000 UTC Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.104603 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.104664 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:37 crc kubenswrapper[4680]: E0217 17:45:37.104724 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:45:37 crc kubenswrapper[4680]: E0217 17:45:37.104774 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.104861 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.104943 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:37 crc kubenswrapper[4680]: E0217 17:45:37.105052 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:45:37 crc kubenswrapper[4680]: E0217 17:45:37.105156 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.132539 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.132621 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.132646 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.132676 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.132704 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:37Z","lastTransitionTime":"2026-02-17T17:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.235556 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.235609 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.235620 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.235634 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.235645 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:37Z","lastTransitionTime":"2026-02-17T17:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.337893 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.337925 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.337936 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.337951 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.337961 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:37Z","lastTransitionTime":"2026-02-17T17:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.440411 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.440442 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.440451 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.440465 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.440475 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:37Z","lastTransitionTime":"2026-02-17T17:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.542568 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.542593 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.542602 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.542614 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.542623 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:37Z","lastTransitionTime":"2026-02-17T17:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.644400 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.644441 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.644460 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.644478 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.644489 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:37Z","lastTransitionTime":"2026-02-17T17:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.746839 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.746875 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.746886 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.746908 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.746918 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:37Z","lastTransitionTime":"2026-02-17T17:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.848592 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.848872 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.848953 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.849041 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.849121 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:37Z","lastTransitionTime":"2026-02-17T17:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.950991 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.951259 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.951365 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.951459 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:37 crc kubenswrapper[4680]: I0217 17:45:37.951554 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:37Z","lastTransitionTime":"2026-02-17T17:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.053734 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.053766 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.053775 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.053788 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.053796 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:38Z","lastTransitionTime":"2026-02-17T17:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.086776 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 03:44:45.633337761 +0000 UTC Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.155865 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.156103 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.156295 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.156430 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.156536 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:38Z","lastTransitionTime":"2026-02-17T17:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.258922 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.258954 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.258963 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.258976 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.258984 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:38Z","lastTransitionTime":"2026-02-17T17:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.360816 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.360859 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.360869 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.360886 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.360897 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:38Z","lastTransitionTime":"2026-02-17T17:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.462446 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.462690 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.462772 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.462872 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.462958 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:38Z","lastTransitionTime":"2026-02-17T17:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.565005 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.565046 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.565057 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.565074 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.565082 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:38Z","lastTransitionTime":"2026-02-17T17:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.666903 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.666938 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.666948 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.666963 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.666974 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:38Z","lastTransitionTime":"2026-02-17T17:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.768611 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.768640 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.768649 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.768660 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.768669 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:38Z","lastTransitionTime":"2026-02-17T17:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.870755 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.870794 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.870804 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.870826 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.870834 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:38Z","lastTransitionTime":"2026-02-17T17:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.973243 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.973516 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.973602 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.973679 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:38 crc kubenswrapper[4680]: I0217 17:45:38.973766 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:38Z","lastTransitionTime":"2026-02-17T17:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.076455 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.076743 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.076854 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.076977 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.077087 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:39Z","lastTransitionTime":"2026-02-17T17:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.087758 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 18:13:21.805609553 +0000 UTC Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.105192 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.105259 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:39 crc kubenswrapper[4680]: E0217 17:45:39.105352 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:45:39 crc kubenswrapper[4680]: E0217 17:45:39.105442 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.105490 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:39 crc kubenswrapper[4680]: E0217 17:45:39.105591 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.105601 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:39 crc kubenswrapper[4680]: E0217 17:45:39.105687 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.178914 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.179177 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.179264 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.179372 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.179498 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:39Z","lastTransitionTime":"2026-02-17T17:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.282485 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.282746 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.282839 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.282937 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.283028 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:39Z","lastTransitionTime":"2026-02-17T17:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.385069 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.385113 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.385124 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.385139 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.385149 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:39Z","lastTransitionTime":"2026-02-17T17:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.476462 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t2c5v_c056df58-0160-4cf6-ae0c-dc7095c8be95/kube-multus/0.log" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.476513 4680 generic.go:334] "Generic (PLEG): container finished" podID="c056df58-0160-4cf6-ae0c-dc7095c8be95" containerID="7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c" exitCode=1 Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.476543 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t2c5v" event={"ID":"c056df58-0160-4cf6-ae0c-dc7095c8be95","Type":"ContainerDied","Data":"7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c"} Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.476880 4680 scope.go:117] "RemoveContainer" containerID="7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.486951 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.486972 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.487006 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.487019 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.487026 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:39Z","lastTransitionTime":"2026-02-17T17:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.490538 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0abaca2-452b-4f7a-b6bf-5d564756ff0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df64de79d0a8a7383c4a9142c8c6949ca0c73de37c7d379879e869693520d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d465ecc6cfa0a120058f1c30c7ca24861e08b7711f3c7e00cee468ab568283a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d465ecc6cfa0a120058f1c30c7ca24861e08b7711f3c7e00cee468ab568283a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.502501 4680 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.516230 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.527330 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c2f672e-82df-4d1a-9ce7-a76595c08a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5265823fddc28db144a7df7c6458c89d279be832a020758d0f081a36af2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d423bdc5590605117c90d7c3ebf60862e980a9ceb1dcc86f9f2a0fe53b956bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-knlwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 
17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.536826 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-psx9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1270b10e-8243-4940-b8a1-e0d36a0ef933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66m8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66m8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-psx9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.549149 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.559003 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xpqrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f833b509-7a9e-4d56-be02-43df3d11b5ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86675ddd7e142b026885dcf74f84b8cd7c320910e31138c9802664f544c1b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxjq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xpqrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.571762 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.583784 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.588796 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.588854 4680 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.588866 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.588880 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.588914 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:39Z","lastTransitionTime":"2026-02-17T17:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.595754 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.607869 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T17:45:38Z\\\",\\\"message\\\":\\\"2026-02-17T17:44:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6ee96071-5538-48fb-83e2-059881b79f4b\\\\n2026-02-17T17:44:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6ee96071-5538-48fb-83e2-059881b79f4b to /host/opt/cni/bin/\\\\n2026-02-17T17:44:53Z [verbose] multus-daemon started\\\\n2026-02-17T17:44:53Z [verbose] Readiness Indicator file check\\\\n2026-02-17T17:45:38Z [error] have you checked that your 
default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.617488 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.629750 4680 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d46aa6668cf75e221b090abe55711c83759d52b5a278a3296b6d7310ff6b0861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.647911 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb0b65b5702e94ab1ddf321b85020cace62dd55923328e4a8acebea0830d7a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb0b65b5702e94ab1ddf321b85020cace62dd55923328e4a8acebea0830d7a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T17:45:16Z\\\",\\\"message\\\":\\\"03] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0217 17:45:16.004754 6239 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0217 17:45:16.004762 6239 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0217 17:45:16.004768 6239 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0217 17:45:16.004772 6239 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0217 17:45:16.004733 6239 services_controller.go:452] Built service openshift-machine-api/machine-api-controllers per-node LB for network=default: []services.LB{}\\\\nI0217 17:45:16.004783 6239 services_controller.go:453] Built service openshift-machine-api/machine-api-controllers template LB for network=default: []services.LB{}\\\\nF0217 17:45:16.004785 6239 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to sta\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:45:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8vx4n_openshift-ovn-kubernetes(2733b1c1-6f98-4184-b59b-123435eeb569)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.659751 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07783092-30f5-44f3-b2ca-8d3fc44309c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.671146 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82bbc557-cf3e-453a-8485-d987ca457549\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b524e303b5baf95c0a7b7716975c1ab1755579270f3097d585466503c56a9c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de5ddc06e863eadae5ce325db8f0a2ba74088e8f27704d5a5029a4ab1f666d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b337f7427aa447a1d32260cf850cfee8550ef1926c808b4462f07f3142c49a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34916ef8a56d46d0bd474e801cdfe2075ecd1d3c9d2d0ff7100d2631d5208b84\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34916ef8a56d46d0bd474e801cdfe2075ecd1d3c9d2d0ff7100d2631d5208b84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.687248 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d082317-b7b6-456f-8a6e-4824bb264ffd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b2f0c7e964fe5845bc4ba7e94c5e947524003ef995fa4275ad94fa9ed30c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7670a97c0e0af19970124e3466324b53d6a90ed83984c035733e5b80096a1a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fd85c0faedf90edc841af7367a6108d1f3edf4900009214a86da350b34529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c12c613dec1f633cee41c47f9286eaac20abe4df2680d53a3421a2d30c4303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f447a89256dce3c9c818e37ad0ac00c5c5cc530898f333c5bdb91e50f904ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.690859 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.690897 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.690907 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.690922 4680 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.690934 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:39Z","lastTransitionTime":"2026-02-17T17:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.698245 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.707334 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T17:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.793356 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.793421 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.793433 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.793445 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.793453 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:39Z","lastTransitionTime":"2026-02-17T17:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.895398 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.895436 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.895448 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.895466 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.895504 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:39Z","lastTransitionTime":"2026-02-17T17:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.997641 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.997679 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.997688 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.997701 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:39 crc kubenswrapper[4680]: I0217 17:45:39.997709 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:39Z","lastTransitionTime":"2026-02-17T17:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.088551 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 12:32:45.490353668 +0000 UTC Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.100365 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.100408 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.100419 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.100439 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.100449 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:40Z","lastTransitionTime":"2026-02-17T17:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.202648 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.202683 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.202692 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.202707 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.202716 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:40Z","lastTransitionTime":"2026-02-17T17:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.304684 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.304732 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.304744 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.304762 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.304776 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:40Z","lastTransitionTime":"2026-02-17T17:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.407117 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.407167 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.407181 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.407200 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.407213 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:40Z","lastTransitionTime":"2026-02-17T17:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.481236 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t2c5v_c056df58-0160-4cf6-ae0c-dc7095c8be95/kube-multus/0.log" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.481292 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t2c5v" event={"ID":"c056df58-0160-4cf6-ae0c-dc7095c8be95","Type":"ContainerStarted","Data":"ff98cc275fec6841e32dc0655c3f31f7544792122c4a96cf4db0e51d52a4065d"} Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.490880 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0abaca2-452b-4f7a-b6bf-5d564756ff0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df64de79d0a8a7383c4a9142c8c6949ca0c73de37c7d379879e869693520d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d465ecc6cfa0a120058f1c30c7ca24861e08b7711f3c7e00cee468ab568283a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d465ecc6cfa0a120058f1c30c7ca24861e08b7711f3c7e00cee468ab568283a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.502241 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.509463 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.509495 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.509504 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.509516 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.509527 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:40Z","lastTransitionTime":"2026-02-17T17:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.511672 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.520715 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c2f672e-82df-4d1a-9ce7-a76595c08a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5265823fddc28db144a7df7c6458c89d279be832a020758d0f081a36af2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d423bdc5590605117c90d7c3ebf60862e980a9ceb1dcc86f9f2a0fe53b956bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:03Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-knlwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.530366 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-psx9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1270b10e-8243-4940-b8a1-e0d36a0ef933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66m8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66m8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-psx9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-17T17:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.543731 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.552327 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xpqrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f833b509-7a9e-4d56-be02-43df3d11b5ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86675ddd7e142b026885dcf74f84b8cd7c320910e31138c9802664f544c1b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxjq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xpqrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.563794 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.574975 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.585545 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.595799 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff98cc275fec6841e32dc0655c3f31f7544792122c4a96cf4db0e51d52a4065d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T17:45:38Z\\\",\\\"message\\\":\\\"2026-02-17T17:44:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6ee96071-5538-48fb-83e2-059881b79f4b\\\\n2026-02-17T17:44:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6ee96071-5538-48fb-83e2-059881b79f4b to /host/opt/cni/bin/\\\\n2026-02-17T17:44:53Z [verbose] multus-daemon started\\\\n2026-02-17T17:44:53Z [verbose] Readiness Indicator file check\\\\n2026-02-17T17:45:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.612108 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.612144 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.612155 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.612174 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.612185 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:40Z","lastTransitionTime":"2026-02-17T17:45:40Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.617037 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb0b65b5702e94ab1ddf321b85020cace62dd55923328e4a8acebea0830d7a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb0b65b5702e94ab1ddf321b85020cace62dd55923328e4a8acebea0830d7a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T17:45:16Z\\\",\\\"message\\\":\\\"03] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0217 17:45:16.004754 6239 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0217 17:45:16.004762 6239 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0217 17:45:16.004768 6239 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0217 17:45:16.004772 6239 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0217 17:45:16.004733 6239 services_controller.go:452] Built service openshift-machine-api/machine-api-controllers per-node LB for network=default: []services.LB{}\\\\nI0217 17:45:16.004783 6239 services_controller.go:453] Built service openshift-machine-api/machine-api-controllers template LB for network=default: []services.LB{}\\\\nF0217 17:45:16.004785 6239 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to 
sta\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:45:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8vx4n_openshift-ovn-kubernetes(2733b1c1-6f98-4184-b59b-123435eeb569)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.631139 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07783092-30f5-44f3-b2ca-8d3fc44309c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.643077 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82bbc557-cf3e-453a-8485-d987ca457549\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b524e303b5baf95c0a7b7716975c1ab1755579270f3097d585466503c56a9c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de5ddc06e863eadae5ce325db8f0a2ba74088e8f27704d5a5029a4ab1f666d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b337f7427aa447a1d32260cf850cfee8550ef1926c808b4462f07f3142c49a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34916ef8a56d46d0bd474e801cdfe2075ecd1d3c9d2d0ff7100d2631d5208b84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34916ef8a56d46d0bd474e801cdfe2075ecd1d3c9d2d0ff7100d2631d5208b84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.668114 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d082317-b7b6-456f-8a6e-4824bb264ffd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b2f0c7e964fe5845bc4ba7e94c5e947524003ef995fa4275ad94fa9ed30c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7670a97c0e0af19970124e3466324b53d6a90ed83984c035733e5b80096a1a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fd85c0faedf90edc841af7367a6108d1f3edf4900009214a86da350b34529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c12c613dec1f633cee41c47f9286eaac20abe
4df2680d53a3421a2d30c4303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f447a89256dce3c9c818e37ad0ac00c5c5cc530898f333c5bdb91e50f904ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.681898 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.692616 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T17:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.703837 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.716482 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.716510 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.716541 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.716556 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.716570 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:40Z","lastTransitionTime":"2026-02-17T17:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.718570 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d46aa6668cf75e221b090abe55711c83759d52b5a278a3296b6d7310ff6b0861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382
bc97e3a6c71778731912036b216cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\"
,\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\
\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.818628 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.818698 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.818711 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.818732 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.818749 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:40Z","lastTransitionTime":"2026-02-17T17:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.921385 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.921762 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.921857 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.921939 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:40 crc kubenswrapper[4680]: I0217 17:45:40.922040 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:40Z","lastTransitionTime":"2026-02-17T17:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.023926 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.023962 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.023973 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.023988 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.023997 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:41Z","lastTransitionTime":"2026-02-17T17:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.088964 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 07:21:15.914644432 +0000 UTC Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.104201 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.104260 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.104279 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.104303 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:41 crc kubenswrapper[4680]: E0217 17:45:41.104682 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:45:41 crc kubenswrapper[4680]: E0217 17:45:41.104908 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:45:41 crc kubenswrapper[4680]: E0217 17:45:41.104995 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:45:41 crc kubenswrapper[4680]: E0217 17:45:41.105191 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.115183 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0abaca2-452b-4f7a-b6bf-5d564756ff0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df64de79d0a8a7383c4a9142c8c6949ca0c73de37c7d379879e869693520d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d465ecc6cfa0a120058f1c30c7ca24861e08b7711f3c7e00cee468ab568283a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d465ecc6cfa0a120058f1c30c7ca24861e08b7711f3c7e00cee468ab568283a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"pod
IP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.130441 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.132501 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.132528 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.132536 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.132548 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.132555 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:41Z","lastTransitionTime":"2026-02-17T17:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.176911 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.197093 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c2f672e-82df-4d1a-9ce7-a76595c08a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5265823fddc28db144a7df7c6458c89d279be832a020758d0f081a36af2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d423bdc5590605117c90d7c3ebf60862e980a9ceb1dcc86f9f2a0fe53b956bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:
03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-knlwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.213538 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-psx9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1270b10e-8243-4940-b8a1-e0d36a0ef933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66m8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66m8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-psx9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.226951 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.234874 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.234908 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.234919 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.234935 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.234947 4680 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:41Z","lastTransitionTime":"2026-02-17T17:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.236221 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xpqrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f833b509-7a9e-4d56-be02-43df3d11b5ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86675ddd7e142b026885dcf74f84b8cd7c320910e31138c9802664f544c1b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxjq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xpqrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.248491 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.259132 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.269677 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.280841 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff98cc275fec6841e32dc0655c3f31f7544792122c4a96cf4db0e51d52a4065d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T17:45:38Z\\\",\\\"message\\\":\\\"2026-02-17T17:44:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6ee96071-5538-48fb-83e2-059881b79f4b\\\\n2026-02-17T17:44:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6ee96071-5538-48fb-83e2-059881b79f4b to /host/opt/cni/bin/\\\\n2026-02-17T17:44:53Z [verbose] multus-daemon started\\\\n2026-02-17T17:44:53Z [verbose] Readiness Indicator file check\\\\n2026-02-17T17:45:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.291133 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07783092-30f5-44f3-b2ca-8d3fc44309c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.301934 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82bbc557-cf3e-453a-8485-d987ca457549\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b524e303b5baf95c0a7b7716975c1ab1755579270f3097d585466503c56a9c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de5ddc06e863eadae5ce325db8f0a2ba74088e8f27704d5a5029a4ab1f666d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b337f7427aa447a1d32260cf850cfee8550ef1926c808b4462f07f3142c49a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34916ef8a56d46d0bd474e801cdfe2075ecd1d3c9d2d0ff7100d2631d5208b84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34916ef8a56d46d0bd474e801cdfe2075ecd1d3c9d2d0ff7100d2631d5208b84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.323212 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d082317-b7b6-456f-8a6e-4824bb264ffd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b2f0c7e964fe5845bc4ba7e94c5e947524003ef995fa4275ad94fa9ed30c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7670a97c0e0af19970124e3466324b53d6a90ed83984c035733e5b80096a1a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fd85c0faedf90edc841af7367a6108d1f3edf4900009214a86da350b34529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c12c613dec1f633cee41c47f9286eaac20abe
4df2680d53a3421a2d30c4303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f447a89256dce3c9c818e37ad0ac00c5c5cc530898f333c5bdb91e50f904ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.334876 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.336475 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.336508 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.336519 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.336533 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.336543 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:41Z","lastTransitionTime":"2026-02-17T17:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.344452 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.354927 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.368931 4680 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d46aa6668cf75e221b090abe55711c83759d52b5a278a3296b6d7310ff6b0861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.387530 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb0b65b5702e94ab1ddf321b85020cace62dd55923328e4a8acebea0830d7a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb0b65b5702e94ab1ddf321b85020cace62dd55923328e4a8acebea0830d7a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T17:45:16Z\\\",\\\"message\\\":\\\"03] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0217 17:45:16.004754 6239 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0217 17:45:16.004762 6239 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0217 17:45:16.004768 6239 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0217 17:45:16.004772 6239 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0217 17:45:16.004733 6239 services_controller.go:452] Built service openshift-machine-api/machine-api-controllers per-node LB for network=default: []services.LB{}\\\\nI0217 17:45:16.004783 6239 services_controller.go:453] Built service openshift-machine-api/machine-api-controllers template LB for network=default: []services.LB{}\\\\nF0217 17:45:16.004785 6239 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to sta\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:45:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8vx4n_openshift-ovn-kubernetes(2733b1c1-6f98-4184-b59b-123435eeb569)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.439333 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.439368 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.439378 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.439393 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.439404 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:41Z","lastTransitionTime":"2026-02-17T17:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.541564 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.541594 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.541602 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.541615 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.541624 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:41Z","lastTransitionTime":"2026-02-17T17:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.553575 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.553629 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.553641 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.553659 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.553672 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:41Z","lastTransitionTime":"2026-02-17T17:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:41 crc kubenswrapper[4680]: E0217 17:45:41.566842 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.570025 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.570056 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.570067 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.570081 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.570091 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:41Z","lastTransitionTime":"2026-02-17T17:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:41 crc kubenswrapper[4680]: E0217 17:45:41.581295 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.584266 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.584327 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.584339 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.584355 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.584366 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:41Z","lastTransitionTime":"2026-02-17T17:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:41 crc kubenswrapper[4680]: E0217 17:45:41.596786 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.599910 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.599955 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.599970 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.599988 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.600001 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:41Z","lastTransitionTime":"2026-02-17T17:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:41 crc kubenswrapper[4680]: E0217 17:45:41.612265 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.614897 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.614935 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.614950 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.614965 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.614980 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:41Z","lastTransitionTime":"2026-02-17T17:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:41 crc kubenswrapper[4680]: E0217 17:45:41.627877 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:41 crc kubenswrapper[4680]: E0217 17:45:41.627981 4680 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.643498 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.643576 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.643590 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.643612 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.643625 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:41Z","lastTransitionTime":"2026-02-17T17:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.745336 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.745371 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.745379 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.745394 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.745404 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:41Z","lastTransitionTime":"2026-02-17T17:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.847748 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.847786 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.847794 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.847808 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.847818 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:41Z","lastTransitionTime":"2026-02-17T17:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.949476 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.949508 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.949518 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.949531 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:41 crc kubenswrapper[4680]: I0217 17:45:41.949540 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:41Z","lastTransitionTime":"2026-02-17T17:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.051393 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.051434 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.051445 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.051459 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.051468 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:42Z","lastTransitionTime":"2026-02-17T17:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.089767 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 18:50:07.087376578 +0000 UTC Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.105768 4680 scope.go:117] "RemoveContainer" containerID="1bb0b65b5702e94ab1ddf321b85020cace62dd55923328e4a8acebea0830d7a1" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.154521 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.154867 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.154880 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.154916 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.154927 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:42Z","lastTransitionTime":"2026-02-17T17:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.257016 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.257114 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.257123 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.257138 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.257150 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:42Z","lastTransitionTime":"2026-02-17T17:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.359301 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.359342 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.359351 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.359364 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.359374 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:42Z","lastTransitionTime":"2026-02-17T17:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.460841 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.460869 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.460879 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.460892 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.460900 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:42Z","lastTransitionTime":"2026-02-17T17:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.487880 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vx4n_2733b1c1-6f98-4184-b59b-123435eeb569/ovnkube-controller/2.log" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.490118 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" event={"ID":"2733b1c1-6f98-4184-b59b-123435eeb569","Type":"ContainerStarted","Data":"c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af"} Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.490499 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.510476 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xpqrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f833b509-7a9e-4d56-be02-43df3d11b5ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86675ddd7e142b026885dcf74f84b8cd7c320910e31138c9802664f544c1b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxjq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xpqrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:42Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 
17:45:42.527031 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:42Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.552396 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:42Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.563379 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.563417 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.563425 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.563439 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.563448 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:42Z","lastTransitionTime":"2026-02-17T17:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.583046 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:42Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.596110 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff98cc275fec6841e32dc0655c3f31f7544792122c4a96cf4db0e51d52a4065d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T17:45:38Z\\\",\\\"message\\\":\\\"2026-02-17T17:44:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6ee96071-5538-48fb-83e2-059881b79f4b\\\\n2026-02-17T17:44:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6ee96071-5538-48fb-83e2-059881b79f4b to /host/opt/cni/bin/\\\\n2026-02-17T17:44:53Z [verbose] multus-daemon started\\\\n2026-02-17T17:44:53Z [verbose] Readiness Indicator file check\\\\n2026-02-17T17:45:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:42Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.608215 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:42Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.619181 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82bbc557-cf3e-453a-8485-d987ca457549\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b524e303b5baf95c0a7b7716975c1ab1755579270f3097d585466503c56a9c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de5ddc06e863eadae5ce325db8f0a2ba74088e8f27704d5a5029a4ab1f666d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b337f7427aa447a1d32260cf850cfee8550ef1926c808b4462f07f3142c49a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34916ef8a56d46d0bd474e801cdfe2075ecd1d3c9d2d0ff7100d2631d5208b84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34916ef8a56d46d0bd474e801cdfe2075ecd1d3c9d2d0ff7100d2631d5208b84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:42Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.640122 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d082317-b7b6-456f-8a6e-4824bb264ffd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b2f0c7e964fe5845bc4ba7e94c5e947524003ef995fa4275ad94fa9ed30c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7670a97c0e0af19970124e3466324b53d6a90ed83984c035733e5b80096a1a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fd85c0faedf90edc841af7367a6108d1f3edf4900009214a86da350b34529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c12c613dec1f633cee41c47f9286eaac20abe
4df2680d53a3421a2d30c4303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f447a89256dce3c9c818e37ad0ac00c5c5cc530898f333c5bdb91e50f904ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:42Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.652243 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:42Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.665260 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.665299 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.665311 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.665352 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.665361 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:42Z","lastTransitionTime":"2026-02-17T17:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.665716 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:42Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.676767 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:42Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.691875 4680 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d46aa6668cf75e221b090abe55711c83759d52b5a278a3296b6d7310ff6b0861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:42Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.714781 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb0b65b5702e94ab1ddf321b85020cace62dd55923328e4a8acebea0830d7a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T17:45:16Z\\\",\\\"message\\\":\\\"03] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0217 17:45:16.004754 6239 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0217 17:45:16.004762 6239 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0217 17:45:16.004768 6239 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0217 17:45:16.004772 6239 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0217 17:45:16.004733 6239 services_controller.go:452] Built service openshift-machine-api/machine-api-controllers per-node LB for network=default: []services.LB{}\\\\nI0217 17:45:16.004783 6239 services_controller.go:453] Built service openshift-machine-api/machine-api-controllers template LB for network=default: []services.LB{}\\\\nF0217 17:45:16.004785 6239 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to 
sta\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:45:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"co
ntainerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:42Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.729896 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07783092-30f5-44f3-b2ca-8d3fc44309c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:42Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.745791 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:42Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.761535 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:42Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.767426 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.767459 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.767469 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.767484 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.767495 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:42Z","lastTransitionTime":"2026-02-17T17:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.774253 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c2f672e-82df-4d1a-9ce7-a76595c08a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5265823fddc28db144a7df7c6458c89d279be832a020758d0f081a36af2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d423bdc5590605117c90d7c3ebf60862e980a9ceb1dcc86f9f2a0fe53b956bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2a
f0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-knlwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:42Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.789252 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-psx9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1270b10e-8243-4940-b8a1-e0d36a0ef933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66m8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66m8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-psx9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:42Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.802498 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0abaca2-452b-4f7a-b6bf-5d564756ff0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df64de79d0a8a7383c4a9142c8c6949ca0c73de37c7d379879e869693520d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d465ecc6cfa0a120058f1c30c7ca24861e08b7711f3c7e00cee468ab568283a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d465ecc6cfa0a120058f1c30c7ca24861e08b7711f3c7e00cee468ab568283a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:42Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.869977 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.870014 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.870027 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.870041 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.870051 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:42Z","lastTransitionTime":"2026-02-17T17:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.972460 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.972487 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.972495 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.972507 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:42 crc kubenswrapper[4680]: I0217 17:45:42.972515 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:42Z","lastTransitionTime":"2026-02-17T17:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.075180 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.075205 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.075214 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.075226 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.075234 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:43Z","lastTransitionTime":"2026-02-17T17:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.089942 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 02:51:03.728943909 +0000 UTC Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.104576 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.104681 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:43 crc kubenswrapper[4680]: E0217 17:45:43.104765 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.104929 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.104956 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:43 crc kubenswrapper[4680]: E0217 17:45:43.105000 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:45:43 crc kubenswrapper[4680]: E0217 17:45:43.105108 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:45:43 crc kubenswrapper[4680]: E0217 17:45:43.105209 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.177793 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.177826 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.177835 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.177846 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.177862 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:43Z","lastTransitionTime":"2026-02-17T17:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.280355 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.280398 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.280408 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.280423 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.280433 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:43Z","lastTransitionTime":"2026-02-17T17:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.382607 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.382682 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.382742 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.382761 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.382773 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:43Z","lastTransitionTime":"2026-02-17T17:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.485672 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.485712 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.485722 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.485737 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.485747 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:43Z","lastTransitionTime":"2026-02-17T17:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.496099 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vx4n_2733b1c1-6f98-4184-b59b-123435eeb569/ovnkube-controller/3.log" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.496729 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vx4n_2733b1c1-6f98-4184-b59b-123435eeb569/ovnkube-controller/2.log" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.498868 4680 generic.go:334] "Generic (PLEG): container finished" podID="2733b1c1-6f98-4184-b59b-123435eeb569" containerID="c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af" exitCode=1 Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.498907 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" event={"ID":"2733b1c1-6f98-4184-b59b-123435eeb569","Type":"ContainerDied","Data":"c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af"} Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.498944 4680 scope.go:117] "RemoveContainer" containerID="1bb0b65b5702e94ab1ddf321b85020cace62dd55923328e4a8acebea0830d7a1" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.499478 4680 scope.go:117] "RemoveContainer" containerID="c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af" Feb 17 17:45:43 crc kubenswrapper[4680]: E0217 17:45:43.499617 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8vx4n_openshift-ovn-kubernetes(2733b1c1-6f98-4184-b59b-123435eeb569)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.520049 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.532150 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.544298 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.558021 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff98cc275fec6841e32dc0655c3f31f7544792122c4a96cf4db0e51d52a4065d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T17:45:38Z\\\",\\\"message\\\":\\\"2026-02-17T17:44:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6ee96071-5538-48fb-83e2-059881b79f4b\\\\n2026-02-17T17:44:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6ee96071-5538-48fb-83e2-059881b79f4b to /host/opt/cni/bin/\\\\n2026-02-17T17:44:53Z [verbose] multus-daemon started\\\\n2026-02-17T17:44:53Z [verbose] Readiness Indicator file check\\\\n2026-02-17T17:45:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.580341 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb0b65b5702e94ab1ddf321b85020cace62dd55923328e4a8acebea0830d7a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T17:45:16Z\\\",\\\"message\\\":\\\"03] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0217 17:45:16.004754 6239 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0217 17:45:16.004762 6239 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0217 17:45:16.004768 6239 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0217 17:45:16.004772 6239 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0217 17:45:16.004733 6239 services_controller.go:452] Built service openshift-machine-api/machine-api-controllers per-node LB for network=default: []services.LB{}\\\\nI0217 17:45:16.004783 6239 services_controller.go:453] Built service openshift-machine-api/machine-api-controllers template LB for network=default: []services.LB{}\\\\nF0217 17:45:16.004785 6239 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to sta\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:45:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T17:45:42Z\\\",\\\"message\\\":\\\") from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 17:45:42.895285 6615 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 17:45:42.895465 6615 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 17:45:42.895491 6615 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 17:45:42.895832 6615 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 17:45:42.896150 6615 factory.go:656] Stopping watch factory\\\\nI0217 17:45:42.945095 6615 
shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0217 17:45:42.945123 6615 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0217 17:45:42.945170 6615 ovnkube.go:599] Stopped ovnkube\\\\nI0217 17:45:42.945192 6615 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 17:45:42.945268 6615 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:45:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.588140 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.588176 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.588191 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.588206 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.588218 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:43Z","lastTransitionTime":"2026-02-17T17:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.594548 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07783092-30f5-44f3-b2ca-8d3fc44309c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.606714 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82bbc557-cf3e-453a-8485-d987ca457549\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b524e303b5baf95c0a7b7716975c1ab1755579270f3097d585466503c56a9c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de5ddc06e863eadae5ce325db8f0a2ba74088e8f27704d5a5029a4ab1f666d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b337f7427aa447a1d32260cf850cfee8550ef1926c808b4462f07f3142c49a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34916ef8a56d46d0bd474e801cdfe2075ecd1d3c9d2d0ff7100d2631d5208b84\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34916ef8a56d46d0bd474e801cdfe2075ecd1d3c9d2d0ff7100d2631d5208b84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.623716 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d082317-b7b6-456f-8a6e-4824bb264ffd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b2f0c7e964fe5845bc4ba7e94c5e947524003ef995fa4275ad94fa9ed30c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7670a97c0e0af19970124e3466324b53d6a90ed83984c035733e5b80096a1a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fd85c0faedf90edc841af7367a6108d1f3edf4900009214a86da350b34529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c12c613dec1f633cee41c47f9286eaac20abe4df2680d53a3421a2d30c4303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f447a89256dce3c9c818e37ad0ac00c5c5cc530898f333c5bdb91e50f904ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.637152 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.650934 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.663826 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.677308 4680 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d46aa6668cf75e221b090abe55711c83759d52b5a278a3296b6d7310ff6b0861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.686304 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0abaca2-452b-4f7a-b6bf-5d564756ff0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df64de79d0a8a7383c4a9142c8c6949ca0c73de37c7d379879e869693520d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d465ecc6cfa0a120058f1c30c7ca24861e08b7711f3c7e00cee468ab568283a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d465ecc6cfa0a120058f1c30c7ca24861e08b7711f3c7e00cee468ab568283a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.690210 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.690235 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.690247 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.690261 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.690272 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:43Z","lastTransitionTime":"2026-02-17T17:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.696619 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"o
vnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.707644 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.717627 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c2f672e-82df-4d1a-9ce7-a76595c08a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5265823fddc28db144a7df7c6458c89d279be832a020758d0f081a36af2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d423bdc5590605117c90d7c3ebf60862e980a9ceb1dcc86f9f2a0fe53b956bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-knlwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 
17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.727982 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-psx9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1270b10e-8243-4940-b8a1-e0d36a0ef933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66m8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66m8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-psx9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.739360 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.748765 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xpqrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f833b509-7a9e-4d56-be02-43df3d11b5ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86675ddd7e142b026885dcf74f84b8cd7c320910e31138c9802664f544c1b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxjq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xpqrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.792467 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.792502 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.792513 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.792528 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.792539 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:43Z","lastTransitionTime":"2026-02-17T17:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.894762 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.894795 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.894807 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.894823 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.894836 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:43Z","lastTransitionTime":"2026-02-17T17:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.997196 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.997228 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.997237 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.997251 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:43 crc kubenswrapper[4680]: I0217 17:45:43.997262 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:43Z","lastTransitionTime":"2026-02-17T17:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.090752 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 20:57:13.493463419 +0000 UTC Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.099414 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.099453 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.099462 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.099476 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.099485 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:44Z","lastTransitionTime":"2026-02-17T17:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.201107 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.201156 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.201163 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.201176 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.201185 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:44Z","lastTransitionTime":"2026-02-17T17:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.303156 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.303191 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.303217 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.303231 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.303239 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:44Z","lastTransitionTime":"2026-02-17T17:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.405561 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.405597 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.405605 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.405618 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.405628 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:44Z","lastTransitionTime":"2026-02-17T17:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.503428 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vx4n_2733b1c1-6f98-4184-b59b-123435eeb569/ovnkube-controller/3.log" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.506660 4680 scope.go:117] "RemoveContainer" containerID="c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af" Feb 17 17:45:44 crc kubenswrapper[4680]: E0217 17:45:44.506793 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8vx4n_openshift-ovn-kubernetes(2733b1c1-6f98-4184-b59b-123435eeb569)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.506889 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.506923 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.506932 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.506947 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.506956 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:44Z","lastTransitionTime":"2026-02-17T17:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.520124 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.531570 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xpqrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f833b509-7a9e-4d56-be02-43df3d11b5ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86675ddd7e142b026885dcf74f84b8cd7c320910e31138c9802664f544c1b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxjq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xpqrx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.543161 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.554428 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.564045 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.575152 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff98cc275fec6841e32dc0655c3f31f7544792122c4a96cf4db0e51d52a4065d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T17:45:38Z\\\",\\\"message\\\":\\\"2026-02-17T17:44:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6ee96071-5538-48fb-83e2-059881b79f4b\\\\n2026-02-17T17:44:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6ee96071-5538-48fb-83e2-059881b79f4b to /host/opt/cni/bin/\\\\n2026-02-17T17:44:53Z [verbose] multus-daemon started\\\\n2026-02-17T17:44:53Z [verbose] Readiness Indicator file check\\\\n2026-02-17T17:45:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.590575 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d46aa6668cf75e221b090abe55711c83759d52b5a278a3296b6d7310ff6b0861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.609008 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.609049 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:44 crc 
kubenswrapper[4680]: I0217 17:45:44.609060 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.609075 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.609086 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:44Z","lastTransitionTime":"2026-02-17T17:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.609028 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c674c284bbd55b50db2119801a9c9903c631a6a9
9a7275d2d07496dda24a54af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T17:45:42Z\\\",\\\"message\\\":\\\") from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 17:45:42.895285 6615 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 17:45:42.895465 6615 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 17:45:42.895491 6615 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 17:45:42.895832 6615 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 17:45:42.896150 6615 factory.go:656] Stopping watch factory\\\\nI0217 17:45:42.945095 6615 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0217 17:45:42.945123 6615 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0217 17:45:42.945170 6615 ovnkube.go:599] Stopped ovnkube\\\\nI0217 17:45:42.945192 6615 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 17:45:42.945268 6615 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:45:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8vx4n_openshift-ovn-kubernetes(2733b1c1-6f98-4184-b59b-123435eeb569)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.621114 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07783092-30f5-44f3-b2ca-8d3fc44309c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.630405 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82bbc557-cf3e-453a-8485-d987ca457549\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b524e303b5baf95c0a7b7716975c1ab1755579270f3097d585466503c56a9c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de5ddc06e863eadae5ce325db8f0a2ba74088e8f27704d5a5029a4ab1f666d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b337f7427aa447a1d32260cf850cfee8550ef1926c808b4462f07f3142c49a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34916ef8a56d46d0bd474e801cdfe2075ecd1d3c9d2d0ff7100d2631d5208b84\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34916ef8a56d46d0bd474e801cdfe2075ecd1d3c9d2d0ff7100d2631d5208b84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.651216 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d082317-b7b6-456f-8a6e-4824bb264ffd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b2f0c7e964fe5845bc4ba7e94c5e947524003ef995fa4275ad94fa9ed30c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7670a97c0e0af19970124e3466324b53d6a90ed83984c035733e5b80096a1a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fd85c0faedf90edc841af7367a6108d1f3edf4900009214a86da350b34529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c12c613dec1f633cee41c47f9286eaac20abe4df2680d53a3421a2d30c4303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f447a89256dce3c9c818e37ad0ac00c5c5cc530898f333c5bdb91e50f904ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.664668 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.674741 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.688524 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.698058 4680 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0abaca2-452b-4f7a-b6bf-5d564756ff0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df64de79d0a8a7383c4a9142c8c6949ca0c73de37c7d379879e869693520d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d465ecc6cfa0a120058f1c30c7ca24861e08b7711f3c7e00cee468ab568283a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d465ecc6cfa0a120058f1c30c7ca24861e08b7711f3c7e00cee468ab568283a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.708973 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.711251 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.711278 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.711302 4680 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.711330 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.711339 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:44Z","lastTransitionTime":"2026-02-17T17:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.721265 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.732491 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c2f672e-82df-4d1a-9ce7-a76595c08a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5265823fddc28db144a7df7c6458c89d279be832a020758d0f081a36af2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d423bdc5590605117c90d7c3ebf60862e980a9ceb1dcc86f9f2a0fe53b956bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-knlwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 
17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.742256 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-psx9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1270b10e-8243-4940-b8a1-e0d36a0ef933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66m8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66m8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-psx9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.813381 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.813421 4680 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.813432 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.813448 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.813458 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:44Z","lastTransitionTime":"2026-02-17T17:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.915452 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.915481 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.915489 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.915522 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:44 crc kubenswrapper[4680]: I0217 17:45:44.915531 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:44Z","lastTransitionTime":"2026-02-17T17:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.018191 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.018237 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.018246 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.018263 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.018279 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:45Z","lastTransitionTime":"2026-02-17T17:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.091031 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 19:31:42.396186794 +0000 UTC Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.104372 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.104420 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.104389 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:45 crc kubenswrapper[4680]: E0217 17:45:45.104510 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.104524 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:45 crc kubenswrapper[4680]: E0217 17:45:45.104592 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:45:45 crc kubenswrapper[4680]: E0217 17:45:45.104654 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:45:45 crc kubenswrapper[4680]: E0217 17:45:45.104715 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.120830 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.120873 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.120882 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.120895 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.120905 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:45Z","lastTransitionTime":"2026-02-17T17:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.224266 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.224305 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.224337 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.224352 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.224363 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:45Z","lastTransitionTime":"2026-02-17T17:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.326446 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.326485 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.326500 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.326517 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.326530 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:45Z","lastTransitionTime":"2026-02-17T17:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.429210 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.429251 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.429262 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.429278 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.429290 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:45Z","lastTransitionTime":"2026-02-17T17:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.531198 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.531233 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.531257 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.531271 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.531280 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:45Z","lastTransitionTime":"2026-02-17T17:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.633715 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.633755 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.633765 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.633781 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.633792 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:45Z","lastTransitionTime":"2026-02-17T17:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.736135 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.736238 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.736266 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.736296 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.736347 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:45Z","lastTransitionTime":"2026-02-17T17:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.838519 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.838594 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.838613 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.838637 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.838655 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:45Z","lastTransitionTime":"2026-02-17T17:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.940491 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.940528 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.940544 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.940560 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:45 crc kubenswrapper[4680]: I0217 17:45:45.940570 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:45Z","lastTransitionTime":"2026-02-17T17:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.043382 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.043420 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.043433 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.043449 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.043461 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:46Z","lastTransitionTime":"2026-02-17T17:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.091627 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 19:16:34.526502684 +0000 UTC Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.145874 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.145928 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.145942 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.145958 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.145969 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:46Z","lastTransitionTime":"2026-02-17T17:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.248156 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.248221 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.248236 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.248250 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.248259 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:46Z","lastTransitionTime":"2026-02-17T17:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.351684 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.351733 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.351752 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.351781 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.351798 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:46Z","lastTransitionTime":"2026-02-17T17:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.453669 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.453734 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.453748 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.453770 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.453815 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:46Z","lastTransitionTime":"2026-02-17T17:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.556116 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.556175 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.556201 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.556248 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.556264 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:46Z","lastTransitionTime":"2026-02-17T17:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.658177 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.658207 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.658215 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.658227 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.658236 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:46Z","lastTransitionTime":"2026-02-17T17:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.760192 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.760243 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.760260 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.760275 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.760287 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:46Z","lastTransitionTime":"2026-02-17T17:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.862558 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.862598 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.862609 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.862623 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.862633 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:46Z","lastTransitionTime":"2026-02-17T17:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.965398 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.965487 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.965520 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.965555 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:46 crc kubenswrapper[4680]: I0217 17:45:46.965582 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:46Z","lastTransitionTime":"2026-02-17T17:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.068517 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.068554 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.068563 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.068575 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.068592 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:47Z","lastTransitionTime":"2026-02-17T17:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.092251 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 23:37:50.62202333 +0000 UTC Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.104731 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.104798 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.104820 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:47 crc kubenswrapper[4680]: E0217 17:45:47.104868 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.104746 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:47 crc kubenswrapper[4680]: E0217 17:45:47.104961 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:45:47 crc kubenswrapper[4680]: E0217 17:45:47.105007 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:45:47 crc kubenswrapper[4680]: E0217 17:45:47.105073 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.171571 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.171624 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.171642 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.171663 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.171680 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:47Z","lastTransitionTime":"2026-02-17T17:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.274272 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.274333 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.274350 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.274370 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.274384 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:47Z","lastTransitionTime":"2026-02-17T17:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.377287 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.377357 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.377369 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.377388 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.377401 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:47Z","lastTransitionTime":"2026-02-17T17:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.480511 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.480547 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.480558 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.480574 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.480588 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:47Z","lastTransitionTime":"2026-02-17T17:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.582288 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.582354 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.582366 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.582381 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.582391 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:47Z","lastTransitionTime":"2026-02-17T17:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.684483 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.684522 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.684534 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.684552 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.684565 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:47Z","lastTransitionTime":"2026-02-17T17:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.786527 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.786558 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.786565 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.786593 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.786603 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:47Z","lastTransitionTime":"2026-02-17T17:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.891076 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.891120 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.891131 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.891144 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.891154 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:47Z","lastTransitionTime":"2026-02-17T17:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.993348 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.993376 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.993386 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.993398 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:47 crc kubenswrapper[4680]: I0217 17:45:47.993408 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:47Z","lastTransitionTime":"2026-02-17T17:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.092643 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 08:20:18.897858866 +0000 UTC Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.095005 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.095036 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.095045 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.095058 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.095067 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:48Z","lastTransitionTime":"2026-02-17T17:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.197796 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.197842 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.197854 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.197870 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.197881 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:48Z","lastTransitionTime":"2026-02-17T17:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.299608 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.299655 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.299664 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.299980 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.300001 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:48Z","lastTransitionTime":"2026-02-17T17:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.402314 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.402364 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.402373 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.402386 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.402394 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:48Z","lastTransitionTime":"2026-02-17T17:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.504809 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.504861 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.504878 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.504898 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.504912 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:48Z","lastTransitionTime":"2026-02-17T17:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.606468 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.606506 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.606517 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.606529 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.606539 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:48Z","lastTransitionTime":"2026-02-17T17:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.708886 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.708935 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.708949 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.708966 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.708979 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:48Z","lastTransitionTime":"2026-02-17T17:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.810922 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.811245 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.811356 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.811443 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.811557 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:48Z","lastTransitionTime":"2026-02-17T17:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.914663 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.914711 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.914726 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.914747 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:48 crc kubenswrapper[4680]: I0217 17:45:48.914763 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:48Z","lastTransitionTime":"2026-02-17T17:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.016824 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.017095 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.017191 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.017282 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.017377 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:49Z","lastTransitionTime":"2026-02-17T17:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.093825 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 07:24:43.713374029 +0000 UTC Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.105339 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.105352 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:49 crc kubenswrapper[4680]: E0217 17:45:49.105779 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.105390 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:49 crc kubenswrapper[4680]: E0217 17:45:49.105786 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.105359 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:49 crc kubenswrapper[4680]: E0217 17:45:49.106030 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:45:49 crc kubenswrapper[4680]: E0217 17:45:49.105968 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.119692 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.119746 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.119763 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.119785 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.119801 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:49Z","lastTransitionTime":"2026-02-17T17:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.222241 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.222513 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.222587 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.222651 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.222712 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:49Z","lastTransitionTime":"2026-02-17T17:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.325469 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.325503 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.325536 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.325548 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.325556 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:49Z","lastTransitionTime":"2026-02-17T17:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.428008 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.428080 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.428093 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.428109 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.428125 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:49Z","lastTransitionTime":"2026-02-17T17:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.530411 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.530454 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.530464 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.530478 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.530487 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:49Z","lastTransitionTime":"2026-02-17T17:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.632578 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.632615 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.632626 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.632642 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.632652 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:49Z","lastTransitionTime":"2026-02-17T17:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.734954 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.735006 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.735023 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.735046 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.735062 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:49Z","lastTransitionTime":"2026-02-17T17:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.836799 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.836836 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.836848 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.836862 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.836872 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:49Z","lastTransitionTime":"2026-02-17T17:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.939393 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.939456 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.939474 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.939499 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:49 crc kubenswrapper[4680]: I0217 17:45:49.939516 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:49Z","lastTransitionTime":"2026-02-17T17:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.042175 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.042243 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.042256 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.042274 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.042286 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:50Z","lastTransitionTime":"2026-02-17T17:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.094996 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 17:33:16.48294305 +0000 UTC Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.147194 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.147482 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.147563 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.147646 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.147716 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:50Z","lastTransitionTime":"2026-02-17T17:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.250215 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.250259 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.250271 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.250288 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.250301 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:50Z","lastTransitionTime":"2026-02-17T17:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.352480 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.352510 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.352521 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.352536 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.352546 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:50Z","lastTransitionTime":"2026-02-17T17:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.454961 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.455023 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.455045 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.455075 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.455097 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:50Z","lastTransitionTime":"2026-02-17T17:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.557052 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.557278 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.557378 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.557466 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.557665 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:50Z","lastTransitionTime":"2026-02-17T17:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.660674 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.660739 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.660752 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.660771 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.660822 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:50Z","lastTransitionTime":"2026-02-17T17:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.763563 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.763621 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.763631 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.763648 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.763660 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:50Z","lastTransitionTime":"2026-02-17T17:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.866704 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.866737 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.866750 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.866766 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.866777 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:50Z","lastTransitionTime":"2026-02-17T17:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.972569 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.972624 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.972636 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.972654 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:50 crc kubenswrapper[4680]: I0217 17:45:50.972667 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:50Z","lastTransitionTime":"2026-02-17T17:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.074753 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.074784 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.074791 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.074803 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.074812 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:51Z","lastTransitionTime":"2026-02-17T17:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.095939 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 04:01:13.928802242 +0000 UTC Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.104216 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.104368 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:51 crc kubenswrapper[4680]: E0217 17:45:51.104377 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.104440 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.104601 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:51 crc kubenswrapper[4680]: E0217 17:45:51.104613 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:45:51 crc kubenswrapper[4680]: E0217 17:45:51.104850 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:45:51 crc kubenswrapper[4680]: E0217 17:45:51.104894 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.118178 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-psx9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1270b10e-8243-4940-b8a1-e0d36a0ef933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66m8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66m8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-psx9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.128456 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0abaca2-452b-4f7a-b6bf-5d564756ff0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df64de79d0a8a7383c4a9142c8c6949ca0c73de37c7d379879e869693520d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d465ecc6cfa0a120058f1c30c7ca24861e08b7711f3c7e00cee468ab568283a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d465ecc6cfa0a120058f1c30c7ca24861e08b7711f3c7e00cee468ab568283a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.141414 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.153917 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.163470 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c2f672e-82df-4d1a-9ce7-a76595c08a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5265823fddc28db144a7df7c6458c89d279be832a020758d0f081a36af2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d423bdc5590605117c90d7c3ebf60862e980a9ceb1dcc86f9f2a0fe53b956bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-knlwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 
17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.177130 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.177158 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.177166 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.177179 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.177188 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:51Z","lastTransitionTime":"2026-02-17T17:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.178224 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.193109 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xpqrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f833b509-7a9e-4d56-be02-43df3d11b5ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86675ddd7e142b026885dcf74f84b8cd7c320910e31138c9802664f544c1b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxjq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xpqrx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.205649 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.219885 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.240674 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.259667 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff98cc275fec6841e32dc0655c3f31f7544792122c4a96cf4db0e51d52a4065d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T17:45:38Z\\\",\\\"message\\\":\\\"2026-02-17T17:44:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6ee96071-5538-48fb-83e2-059881b79f4b\\\\n2026-02-17T17:44:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6ee96071-5538-48fb-83e2-059881b79f4b to /host/opt/cni/bin/\\\\n2026-02-17T17:44:53Z [verbose] multus-daemon started\\\\n2026-02-17T17:44:53Z [verbose] Readiness Indicator file check\\\\n2026-02-17T17:45:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.274100 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.279621 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.279680 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.279693 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.279708 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.279719 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:51Z","lastTransitionTime":"2026-02-17T17:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.288545 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.304956 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d46aa6668cf75e221b090abe55711c83759d52b5a278a3296b6d7310ff6b0861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:55Z\\\",\\\"reaso
n\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T17:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.326400 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T17:45:42Z\\\",\\\"message\\\":\\\") from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 17:45:42.895285 6615 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 17:45:42.895465 6615 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 17:45:42.895491 6615 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 17:45:42.895832 6615 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 17:45:42.896150 6615 factory.go:656] Stopping watch factory\\\\nI0217 17:45:42.945095 6615 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0217 17:45:42.945123 6615 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0217 17:45:42.945170 6615 ovnkube.go:599] Stopped ovnkube\\\\nI0217 17:45:42.945192 6615 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 17:45:42.945268 6615 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:45:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8vx4n_openshift-ovn-kubernetes(2733b1c1-6f98-4184-b59b-123435eeb569)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.339706 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07783092-30f5-44f3-b2ca-8d3fc44309c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.351085 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82bbc557-cf3e-453a-8485-d987ca457549\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b524e303b5baf95c0a7b7716975c1ab1755579270f3097d585466503c56a9c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de5ddc06e863eadae5ce325db8f0a2ba74088e8f27704d5a5029a4ab1f666d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b337f7427aa447a1d32260cf850cfee8550ef1926c808b4462f07f3142c49a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34916ef8a56d46d0bd474e801cdfe2075ecd1d3c9d2d0ff7100d2631d5208b84\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34916ef8a56d46d0bd474e801cdfe2075ecd1d3c9d2d0ff7100d2631d5208b84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.371533 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d082317-b7b6-456f-8a6e-4824bb264ffd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b2f0c7e964fe5845bc4ba7e94c5e947524003ef995fa4275ad94fa9ed30c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7670a97c0e0af19970124e3466324b53d6a90ed83984c035733e5b80096a1a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fd85c0faedf90edc841af7367a6108d1f3edf4900009214a86da350b34529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c12c613dec1f633cee41c47f9286eaac20abe4df2680d53a3421a2d30c4303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f447a89256dce3c9c818e37ad0ac00c5c5cc530898f333c5bdb91e50f904ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.381972 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.382183 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.382257 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.382349 4680 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.382448 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:51Z","lastTransitionTime":"2026-02-17T17:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.386497 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.485500 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.485666 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.485748 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.485819 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.485902 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:51Z","lastTransitionTime":"2026-02-17T17:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.588053 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.588114 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.588132 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.588152 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.588166 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:51Z","lastTransitionTime":"2026-02-17T17:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.690763 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.690803 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.690814 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.690828 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.690840 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:51Z","lastTransitionTime":"2026-02-17T17:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.793181 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.793218 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.793228 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.793242 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.793251 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:51Z","lastTransitionTime":"2026-02-17T17:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.895790 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.895831 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.895841 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.895855 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.895865 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:51Z","lastTransitionTime":"2026-02-17T17:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.998462 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.998512 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.998528 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.998549 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:51 crc kubenswrapper[4680]: I0217 17:45:51.998564 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:51Z","lastTransitionTime":"2026-02-17T17:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.008902 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.008944 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.008954 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.008966 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.008976 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:52Z","lastTransitionTime":"2026-02-17T17:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:52 crc kubenswrapper[4680]: E0217 17:45:52.020686 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:52Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.024716 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.024890 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.024959 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.025043 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.025105 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:52Z","lastTransitionTime":"2026-02-17T17:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:52 crc kubenswrapper[4680]: E0217 17:45:52.037431 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:52Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.041565 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.041693 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.041710 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.041731 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.041743 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:52Z","lastTransitionTime":"2026-02-17T17:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:52 crc kubenswrapper[4680]: E0217 17:45:52.053971 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:52Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.057580 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.057906 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.057990 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.058068 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.058131 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:52Z","lastTransitionTime":"2026-02-17T17:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:52 crc kubenswrapper[4680]: E0217 17:45:52.069311 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:52Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.073031 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.073064 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.073073 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.073107 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.073117 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:52Z","lastTransitionTime":"2026-02-17T17:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:52 crc kubenswrapper[4680]: E0217 17:45:52.084711 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:45:52Z is after 2025-08-24T17:21:41Z" Feb 17 17:45:52 crc kubenswrapper[4680]: E0217 17:45:52.084943 4680 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.096767 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 
05:53:03 +0000 UTC, rotation deadline is 2025-11-28 07:25:47.71139228 +0000 UTC Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.100614 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.100661 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.100672 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.100686 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.100698 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:52Z","lastTransitionTime":"2026-02-17T17:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.203533 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.203569 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.203577 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.203590 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.203600 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:52Z","lastTransitionTime":"2026-02-17T17:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.307158 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.307194 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.307205 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.307220 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.307230 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:52Z","lastTransitionTime":"2026-02-17T17:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.409747 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.410042 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.410170 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.410285 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.410418 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:52Z","lastTransitionTime":"2026-02-17T17:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.512981 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.513011 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.513021 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.513034 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.513044 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:52Z","lastTransitionTime":"2026-02-17T17:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.615496 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.615540 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.615549 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.615561 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.615570 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:52Z","lastTransitionTime":"2026-02-17T17:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.717835 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.717872 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.717882 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.717897 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.717906 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:52Z","lastTransitionTime":"2026-02-17T17:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.820475 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.820517 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.820528 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.820546 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.820557 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:52Z","lastTransitionTime":"2026-02-17T17:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.922099 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.922131 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.922141 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.922163 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:52 crc kubenswrapper[4680]: I0217 17:45:52.922178 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:52Z","lastTransitionTime":"2026-02-17T17:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.024788 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.024826 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.024837 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.024852 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.024863 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:53Z","lastTransitionTime":"2026-02-17T17:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.097645 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 05:15:54.690136388 +0000 UTC Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.104181 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:53 crc kubenswrapper[4680]: E0217 17:45:53.104284 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.104442 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.104454 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.104496 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:53 crc kubenswrapper[4680]: E0217 17:45:53.104555 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:45:53 crc kubenswrapper[4680]: E0217 17:45:53.104620 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:45:53 crc kubenswrapper[4680]: E0217 17:45:53.104673 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.127517 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.127550 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.127559 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.127571 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.127580 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:53Z","lastTransitionTime":"2026-02-17T17:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.230032 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.230126 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.230177 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.230208 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.230265 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:53Z","lastTransitionTime":"2026-02-17T17:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.332630 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.332695 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.332710 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.332724 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.332734 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:53Z","lastTransitionTime":"2026-02-17T17:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.435354 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.435393 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.435408 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.435424 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.435434 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:53Z","lastTransitionTime":"2026-02-17T17:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.537664 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.537705 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.537722 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.537738 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.537749 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:53Z","lastTransitionTime":"2026-02-17T17:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.640368 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.640413 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.640429 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.640449 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.640462 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:53Z","lastTransitionTime":"2026-02-17T17:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.742107 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.742465 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.742615 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.742748 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.743029 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:53Z","lastTransitionTime":"2026-02-17T17:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.846148 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.846177 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.846185 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.846197 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.846206 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:53Z","lastTransitionTime":"2026-02-17T17:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.948239 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.948299 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.948310 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.948344 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:53 crc kubenswrapper[4680]: I0217 17:45:53.948356 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:53Z","lastTransitionTime":"2026-02-17T17:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.040507 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:45:54 crc kubenswrapper[4680]: E0217 17:45:54.040596 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:58.040577828 +0000 UTC m=+147.591476507 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.051451 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.051484 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.051529 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.051549 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.051560 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:54Z","lastTransitionTime":"2026-02-17T17:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.098309 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 08:23:36.168476358 +0000 UTC Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.141118 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.141166 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.141191 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.141209 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:54 crc kubenswrapper[4680]: E0217 17:45:54.141212 4680 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 17:45:54 crc kubenswrapper[4680]: E0217 17:45:54.141290 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 17:46:58.141270116 +0000 UTC m=+147.692168795 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 17:45:54 crc kubenswrapper[4680]: E0217 17:45:54.141313 4680 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 17:45:54 crc kubenswrapper[4680]: E0217 17:45:54.141360 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 17:45:54 crc kubenswrapper[4680]: E0217 17:45:54.141381 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 17:45:54 crc kubenswrapper[4680]: E0217 17:45:54.141394 4680 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 17:45:54 crc kubenswrapper[4680]: E0217 17:45:54.141394 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 17:45:54 crc kubenswrapper[4680]: E0217 17:45:54.141425 4680 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 17:45:54 crc kubenswrapper[4680]: E0217 17:45:54.141437 4680 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 17:45:54 crc kubenswrapper[4680]: E0217 17:45:54.141411 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 17:46:58.1413971 +0000 UTC m=+147.692295779 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 17:45:54 crc kubenswrapper[4680]: E0217 17:45:54.141488 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 17:46:58.141462412 +0000 UTC m=+147.692361081 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 17:45:54 crc kubenswrapper[4680]: E0217 17:45:54.141500 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 17:46:58.141495593 +0000 UTC m=+147.692394272 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.153357 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.153395 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.153429 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.153446 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.153459 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:54Z","lastTransitionTime":"2026-02-17T17:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.255892 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.255924 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.255963 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.255976 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.255985 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:54Z","lastTransitionTime":"2026-02-17T17:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.358374 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.358416 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.358427 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.358441 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.358451 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:54Z","lastTransitionTime":"2026-02-17T17:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.461629 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.461905 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.461916 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.461930 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.461942 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:54Z","lastTransitionTime":"2026-02-17T17:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.563748 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.563817 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.563828 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.563844 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.563855 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:54Z","lastTransitionTime":"2026-02-17T17:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.665705 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.665740 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.665749 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.665761 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.665772 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:54Z","lastTransitionTime":"2026-02-17T17:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.768222 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.768261 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.768270 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.768285 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.768295 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:54Z","lastTransitionTime":"2026-02-17T17:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.870967 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.871244 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.871474 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.871771 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.872042 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:54Z","lastTransitionTime":"2026-02-17T17:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.974819 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.975098 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.975171 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.975234 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:54 crc kubenswrapper[4680]: I0217 17:45:54.975301 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:54Z","lastTransitionTime":"2026-02-17T17:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.077406 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.077451 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.077462 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.077480 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.077494 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:55Z","lastTransitionTime":"2026-02-17T17:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.099345 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 05:55:28.264955338 +0000 UTC Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.104723 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.104722 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.104727 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:55 crc kubenswrapper[4680]: E0217 17:45:55.105011 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.104873 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:55 crc kubenswrapper[4680]: E0217 17:45:55.105082 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:45:55 crc kubenswrapper[4680]: E0217 17:45:55.104827 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:45:55 crc kubenswrapper[4680]: E0217 17:45:55.104941 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.180613 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.181105 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.181188 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.181260 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.181356 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:55Z","lastTransitionTime":"2026-02-17T17:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.283273 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.283475 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.283485 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.283497 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.283506 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:55Z","lastTransitionTime":"2026-02-17T17:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.385487 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.385780 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.385845 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.385913 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.385974 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:55Z","lastTransitionTime":"2026-02-17T17:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.489522 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.489958 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.490046 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.490125 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.490226 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:55Z","lastTransitionTime":"2026-02-17T17:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.594624 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.594656 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.594666 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.594680 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.594690 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:55Z","lastTransitionTime":"2026-02-17T17:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.697118 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.697152 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.697161 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.697177 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.697186 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:55Z","lastTransitionTime":"2026-02-17T17:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.799592 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.799651 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.799662 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.799697 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.799710 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:55Z","lastTransitionTime":"2026-02-17T17:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.902264 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.902578 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.902674 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.902800 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:55 crc kubenswrapper[4680]: I0217 17:45:55.902897 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:55Z","lastTransitionTime":"2026-02-17T17:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.005619 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.005679 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.005693 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.005717 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.005735 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:56Z","lastTransitionTime":"2026-02-17T17:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.103464 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 04:43:19.415715847 +0000 UTC Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.108635 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.108780 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.108850 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.108909 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.108972 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:56Z","lastTransitionTime":"2026-02-17T17:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.210938 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.210980 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.210992 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.211009 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.211020 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:56Z","lastTransitionTime":"2026-02-17T17:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.312850 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.313120 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.313279 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.313423 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.313522 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:56Z","lastTransitionTime":"2026-02-17T17:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.416224 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.416477 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.416551 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.416617 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.416682 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:56Z","lastTransitionTime":"2026-02-17T17:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.519298 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.519345 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.519358 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.519373 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.519383 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:56Z","lastTransitionTime":"2026-02-17T17:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.621528 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.621568 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.621579 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.621595 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.621608 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:56Z","lastTransitionTime":"2026-02-17T17:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.724644 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.724690 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.724700 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.724718 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.724729 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:56Z","lastTransitionTime":"2026-02-17T17:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.827560 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.827610 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.827622 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.827640 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.827651 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:56Z","lastTransitionTime":"2026-02-17T17:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.930562 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.930647 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.930664 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.930700 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:56 crc kubenswrapper[4680]: I0217 17:45:56.930722 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:56Z","lastTransitionTime":"2026-02-17T17:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.033593 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.033640 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.033650 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.033667 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.033680 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:57Z","lastTransitionTime":"2026-02-17T17:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.103647 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 01:55:07.664783582 +0000 UTC Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.104924 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.105002 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.105101 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:57 crc kubenswrapper[4680]: E0217 17:45:57.105093 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:45:57 crc kubenswrapper[4680]: E0217 17:45:57.105222 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:45:57 crc kubenswrapper[4680]: E0217 17:45:57.105337 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.105417 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:57 crc kubenswrapper[4680]: E0217 17:45:57.105491 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.106089 4680 scope.go:117] "RemoveContainer" containerID="c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af" Feb 17 17:45:57 crc kubenswrapper[4680]: E0217 17:45:57.106231 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8vx4n_openshift-ovn-kubernetes(2733b1c1-6f98-4184-b59b-123435eeb569)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.135680 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.135718 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.135727 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.135741 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.135751 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:57Z","lastTransitionTime":"2026-02-17T17:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.238197 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.238239 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.238252 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.238302 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.238344 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:57Z","lastTransitionTime":"2026-02-17T17:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.340508 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.340561 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.340573 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.340589 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.340939 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:57Z","lastTransitionTime":"2026-02-17T17:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.443023 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.443076 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.443099 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.443126 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.443146 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:57Z","lastTransitionTime":"2026-02-17T17:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.545076 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.545116 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.545128 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.545143 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.545155 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:57Z","lastTransitionTime":"2026-02-17T17:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.648101 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.648164 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.648182 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.648206 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.648224 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:57Z","lastTransitionTime":"2026-02-17T17:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.750882 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.751233 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.751371 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.751450 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.751552 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:57Z","lastTransitionTime":"2026-02-17T17:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.853571 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.853598 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.853607 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.853621 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.853630 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:57Z","lastTransitionTime":"2026-02-17T17:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.955647 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.955727 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.955752 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.955783 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:57 crc kubenswrapper[4680]: I0217 17:45:57.955805 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:57Z","lastTransitionTime":"2026-02-17T17:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.058364 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.058396 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.058405 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.058417 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.058426 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:58Z","lastTransitionTime":"2026-02-17T17:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.104496 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 22:34:22.147572545 +0000 UTC Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.160445 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.160506 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.160522 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.160540 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.160556 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:58Z","lastTransitionTime":"2026-02-17T17:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.263306 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.263581 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.263655 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.263729 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.263787 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:58Z","lastTransitionTime":"2026-02-17T17:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.366342 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.366379 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.366390 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.366404 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.366413 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:58Z","lastTransitionTime":"2026-02-17T17:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.468795 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.468832 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.468844 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.468858 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.468867 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:58Z","lastTransitionTime":"2026-02-17T17:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.570951 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.570989 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.570998 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.571010 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.571019 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:58Z","lastTransitionTime":"2026-02-17T17:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.672796 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.672850 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.672864 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.672879 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.672889 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:58Z","lastTransitionTime":"2026-02-17T17:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.774892 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.774929 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.774938 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.774952 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.774961 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:58Z","lastTransitionTime":"2026-02-17T17:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.877187 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.877231 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.877245 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.877258 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.877269 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:58Z","lastTransitionTime":"2026-02-17T17:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.979995 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.980252 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.980368 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.980505 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:58 crc kubenswrapper[4680]: I0217 17:45:58.980601 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:58Z","lastTransitionTime":"2026-02-17T17:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.082494 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.082547 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.082558 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.082575 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.082586 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:59Z","lastTransitionTime":"2026-02-17T17:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.105105 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 02:22:36.46715987 +0000 UTC Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.105392 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.105229 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.105271 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:45:59 crc kubenswrapper[4680]: E0217 17:45:59.105639 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:45:59 crc kubenswrapper[4680]: E0217 17:45:59.105498 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:45:59 crc kubenswrapper[4680]: E0217 17:45:59.105707 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.105229 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:45:59 crc kubenswrapper[4680]: E0217 17:45:59.105803 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.185389 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.185443 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.185462 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.185488 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.185505 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:59Z","lastTransitionTime":"2026-02-17T17:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.293205 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.293240 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.293270 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.293283 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.293292 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:59Z","lastTransitionTime":"2026-02-17T17:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.395083 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.395177 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.395187 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.395199 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.395208 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:59Z","lastTransitionTime":"2026-02-17T17:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.498093 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.498154 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.498171 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.498194 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.498210 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:59Z","lastTransitionTime":"2026-02-17T17:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.600580 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.601031 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.601137 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.601233 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.601339 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:59Z","lastTransitionTime":"2026-02-17T17:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.703627 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.703660 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.703669 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.703708 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.703718 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:59Z","lastTransitionTime":"2026-02-17T17:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.806125 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.806189 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.806210 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.806233 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.806250 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:59Z","lastTransitionTime":"2026-02-17T17:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.908651 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.908807 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.908824 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.909180 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:45:59 crc kubenswrapper[4680]: I0217 17:45:59.909476 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:45:59Z","lastTransitionTime":"2026-02-17T17:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.011624 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.011661 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.011673 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.011688 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.011706 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:00Z","lastTransitionTime":"2026-02-17T17:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.105756 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 18:36:57.629928279 +0000 UTC Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.115547 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.115598 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.115609 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.115624 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.115635 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:00Z","lastTransitionTime":"2026-02-17T17:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.218248 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.218286 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.218294 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.218308 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.218333 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:00Z","lastTransitionTime":"2026-02-17T17:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.321006 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.321417 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.321535 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.321649 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.321759 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:00Z","lastTransitionTime":"2026-02-17T17:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.424240 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.424564 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.424664 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.424750 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.424832 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:00Z","lastTransitionTime":"2026-02-17T17:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.526705 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.527033 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.527166 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.527288 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.527418 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:00Z","lastTransitionTime":"2026-02-17T17:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.629457 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.629496 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.629509 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.629525 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.629536 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:00Z","lastTransitionTime":"2026-02-17T17:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.731782 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.731828 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.731840 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.731856 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.731869 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:00Z","lastTransitionTime":"2026-02-17T17:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.833835 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.833873 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.833884 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.833897 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.833908 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:00Z","lastTransitionTime":"2026-02-17T17:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.935815 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.936188 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.936417 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.936616 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:00 crc kubenswrapper[4680]: I0217 17:46:00.936766 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:00Z","lastTransitionTime":"2026-02-17T17:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.039345 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.039376 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.039384 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.039398 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.039407 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:01Z","lastTransitionTime":"2026-02-17T17:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.105192 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.105607 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.105731 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:46:01 crc kubenswrapper[4680]: E0217 17:46:01.105732 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:46:01 crc kubenswrapper[4680]: E0217 17:46:01.105817 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.106132 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:46:01 crc kubenswrapper[4680]: E0217 17:46:01.106200 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.106221 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 12:34:27.519120131 +0000 UTC Feb 17 17:46:01 crc kubenswrapper[4680]: E0217 17:46:01.106436 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.119257 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78e096552540d52b81ee8329e6a3642374004e6b3fbd46534a329f71ef4ee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://564debf7a930424d6758e55b00b41c12d88d0748440794669217fe98fc723042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.130455 4680 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b01faa4179614af67793369ab2d413f169e590ed93a02c24fc553db31cee333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.141167 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c2f672e-82df-4d1a-9ce7-a76595c08a27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5265823fddc28db144a7df7c6458c89d279be832a020758d0f081a36af2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d423bdc5590605117c90d7c3ebf60862e980a9ceb1dcc86f9f2a0fe53b956bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5pss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-knlwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 
17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.141999 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.142031 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.142044 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.142060 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.142071 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:01Z","lastTransitionTime":"2026-02-17T17:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.150232 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-psx9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1270b10e-8243-4940-b8a1-e0d36a0ef933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66m8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66m8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:45:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-psx9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.160427 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0abaca2-452b-4f7a-b6bf-5d564756ff0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df64de79d0a8a7383c4a9142c8c6949ca0c73de37c7d379879e869693520d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d465ecc6cfa0a120058f1c30c7ca24861e08b7711f3c7e00cee468ab568283a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d465ecc6cfa0a120058f1c30c7ca24861e08b7711f3c7e00cee468ab568283a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.172743 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xpqrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f833b509-7a9e-4d56-be02-43df3d11b5ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86675ddd7e142b026885dcf74f84b8cd7c320910e31138c9802664f544c1b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxjq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xpqrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.185867 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.197513 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb97af76f990b62854791e7479e4168b2b3d330b171988de190f645741eccd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.208023 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.219212 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t2c5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c056df58-0160-4cf6-ae0c-dc7095c8be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff98cc275fec6841e32dc0655c3f31f7544792122c4a96cf4db0e51d52a4065d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T17:45:38Z\\\",\\\"message\\\":\\\"2026-02-17T17:44:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6ee96071-5538-48fb-83e2-059881b79f4b\\\\n2026-02-17T17:44:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6ee96071-5538-48fb-83e2-059881b79f4b to /host/opt/cni/bin/\\\\n2026-02-17T17:44:53Z [verbose] multus-daemon started\\\\n2026-02-17T17:44:53Z [verbose] Readiness Indicator file check\\\\n2026-02-17T17:45:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv9pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t2c5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.232082 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e5e24bd-d23b-4c8f-ba86-76e64afcd012\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T17:44:44Z\\\",\\\"message\\\":\\\"W0217 17:44:34.219226 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 17:44:34.219530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771350274 cert, and key in /tmp/serving-cert-4096490625/serving-signer.crt, /tmp/serving-cert-4096490625/serving-signer.key\\\\nI0217 17:44:34.508531 1 observer_polling.go:159] Starting file observer\\\\nW0217 17:44:34.511488 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 17:44:34.511680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 17:44:34.513675 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4096490625/tls.crt::/tmp/serving-cert-4096490625/tls.key\\\\\\\"\\\\nF0217 17:44:44.872167 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.243748 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82bbc557-cf3e-453a-8485-d987ca457549\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:45:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b524e303b5baf95c0a7b7716975c1ab1755579270f3097d585466503c56a9c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de5ddc06e863eadae5ce325db8f0a2ba74088e8f27704d5a5029a4ab1f666d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b337f7427aa447a1d32260cf850cfee8550ef1926c808b4462f07f3142c49a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34916ef8a56d46d0bd474e801cdfe2075ecd1d3c9d2d0ff7100d2631d5208b84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34916ef8a56d46d0bd474e801cdfe2075ecd1d3c9d2d0ff7100d2631d5208b84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.244357 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.244475 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.244545 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.244608 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.244670 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:01Z","lastTransitionTime":"2026-02-17T17:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.266038 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d082317-b7b6-456f-8a6e-4824bb264ffd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b2f0c7e964fe5845bc4ba7e94c5e947524003ef995fa4275ad94fa9ed30c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7670a97c0e0af19970124e3466324b53d6a90ed83984c035733e5b80096a1a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97fd85c0faedf90edc841af7367a6108d1f3edf4900009214a86da350b34529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c12c613dec1f633cee41c47f9286eaac20abe4df2680d53a3421a2d30c4303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f447a89256dce3c9c818e37ad0ac00c5c5cc530898f333c5bdb91e50f904ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b41924517d5ac32cdf495c2f6269e243076e820529929cbe773f1c144bea2102\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b72fec323319c770043f6257b7d7c36b953a10c5f6c640490a4ceb33b204642\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91df927da378f7c4ec0e7df2ff50ad00ec205bee7af1f4e422b237f2393d5c55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.277896 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.286600 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5ckv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68c51268-d876-4559-a54d-46984c40743c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d0e66e07b52e11243d52561eeff1bb218342384de06c718cc19dd90ee8ce69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqfxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5ckv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T17:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.295573 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd08058-016e-4607-8be6-40463674c2fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f2ea094d7dad2940386d7c97726311a18b29d6cc207f2183570f99f50450f40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89b9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfw56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.308187 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63b718d8-11fb-4613-af5a-d48392570e7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d46aa6668cf75e221b090abe55711c83759d52b5a278a3296b6d7310ff6b0861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://348e95b68b9b7e6a454e15877c382bc97e3a6c71778731912036b216cfaff383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c8313b9a0692f454ce87617faceacf9e6a2e8bc5aa87ff76c32e2ec36d2767f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a710142bde1a66246c399720d5c9d8386d65e0bd7990a85a3bfe5e52dc7eec9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f647a0935ceb7310d3dce10d3d1a633cb97bc4ecba8bc366a201e11ba1619702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:55Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef8c7cfcf42e0912e466a7263e96a103ee80af4d9f933748f350a0babd5ac2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df6c8171b1cb01febf36af73475350cb730d345df95ade591adfe5c7b9935268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjp7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdjhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:46:01Z is after 
2025-08-24T17:21:41Z" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.324747 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2733b1c1-6f98-4184-b59b-123435eeb569\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8
dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T17:45:42Z\\\",\\\"message\\\":\\\") from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 17:45:42.895285 6615 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 17:45:42.895465 6615 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 17:45:42.895491 6615 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 17:45:42.895832 6615 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 17:45:42.896150 6615 factory.go:656] Stopping watch factory\\\\nI0217 17:45:42.945095 6615 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0217 17:45:42.945123 6615 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0217 17:45:42.945170 6615 ovnkube.go:599] Stopped ovnkube\\\\nI0217 17:45:42.945192 6615 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 17:45:42.945268 6615 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T17:45:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8vx4n_openshift-ovn-kubernetes(2733b1c1-6f98-4184-b59b-123435eeb569)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T17:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T17:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vx4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.337780 4680 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07783092-30f5-44f3-b2ca-8d3fc44309c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T17:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a8d63c15ce8035756a23a72b22fb029e2da2ae56c79009616f525f0dcce0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c750e3d2c6c0c352c898c635ee204c4847daff97fd35c6c0b82f6a8a5ad55d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc092265c54ad900e19f2895b0e9336040dc688d830fc5b16dcb2df42e5df7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T17:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T17:44:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.347863 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.347977 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.348045 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.348113 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.348176 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:01Z","lastTransitionTime":"2026-02-17T17:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.451209 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.451262 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.451274 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.451295 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.451307 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:01Z","lastTransitionTime":"2026-02-17T17:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.554057 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.554369 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.554489 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.554563 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.554634 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:01Z","lastTransitionTime":"2026-02-17T17:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.656117 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.656152 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.656162 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.656176 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.656190 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:01Z","lastTransitionTime":"2026-02-17T17:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.758709 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.758743 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.758751 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.758764 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.758772 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:01Z","lastTransitionTime":"2026-02-17T17:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.860885 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.860923 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.860931 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.860945 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.860954 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:01Z","lastTransitionTime":"2026-02-17T17:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.963711 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.963782 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.963800 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.963829 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:01 crc kubenswrapper[4680]: I0217 17:46:01.963844 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:01Z","lastTransitionTime":"2026-02-17T17:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.066835 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.066896 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.066912 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.066934 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.066949 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:02Z","lastTransitionTime":"2026-02-17T17:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.106678 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 03:24:14.02522781 +0000 UTC Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.116826 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.116882 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.116893 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.116914 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.116931 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:02Z","lastTransitionTime":"2026-02-17T17:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:02 crc kubenswrapper[4680]: E0217 17:46:02.138278 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:46:02Z is after 2025-08-24T17:21:41Z" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.144125 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.144309 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.144536 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.144669 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.144926 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:02Z","lastTransitionTime":"2026-02-17T17:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:02 crc kubenswrapper[4680]: E0217 17:46:02.162247 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:46:02Z is after 2025-08-24T17:21:41Z" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.166982 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.167023 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.167035 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.167053 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.167065 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:02Z","lastTransitionTime":"2026-02-17T17:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:02 crc kubenswrapper[4680]: E0217 17:46:02.181204 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:46:02Z is after 2025-08-24T17:21:41Z" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.184914 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.184955 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.184963 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.184976 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.184985 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:02Z","lastTransitionTime":"2026-02-17T17:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:02 crc kubenswrapper[4680]: E0217 17:46:02.199950 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:46:02Z is after 2025-08-24T17:21:41Z" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.203704 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.203740 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.203752 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.203768 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.203780 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:02Z","lastTransitionTime":"2026-02-17T17:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:02 crc kubenswrapper[4680]: E0217 17:46:02.219798 4680 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T17:46:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"03bcc060-8056-40d5-9505-0ca683e24302\\\",\\\"systemUUID\\\":\\\"3f55a8ca-57de-4e9a-afc9-858ba7e46832\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T17:46:02Z is after 2025-08-24T17:21:41Z" Feb 17 17:46:02 crc kubenswrapper[4680]: E0217 17:46:02.220018 4680 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.221682 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.221714 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.221727 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.221743 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.221755 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:02Z","lastTransitionTime":"2026-02-17T17:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.325308 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.325346 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.325355 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.325367 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.325376 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:02Z","lastTransitionTime":"2026-02-17T17:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.427748 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.427780 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.427792 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.427806 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.427815 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:02Z","lastTransitionTime":"2026-02-17T17:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.530030 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.530070 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.530082 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.530099 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.530110 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:02Z","lastTransitionTime":"2026-02-17T17:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.631741 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.631780 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.631797 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.631814 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.631825 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:02Z","lastTransitionTime":"2026-02-17T17:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.733691 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.733959 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.734089 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.734191 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.734297 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:02Z","lastTransitionTime":"2026-02-17T17:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.836956 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.837210 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.837291 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.837389 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.837512 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:02Z","lastTransitionTime":"2026-02-17T17:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.939731 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.939786 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.939797 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.939811 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:02 crc kubenswrapper[4680]: I0217 17:46:02.939820 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:02Z","lastTransitionTime":"2026-02-17T17:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.041888 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.041920 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.041928 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.041942 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.041951 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:03Z","lastTransitionTime":"2026-02-17T17:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.104972 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.105061 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.105121 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.105140 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:46:03 crc kubenswrapper[4680]: E0217 17:46:03.105147 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:46:03 crc kubenswrapper[4680]: E0217 17:46:03.105214 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:46:03 crc kubenswrapper[4680]: E0217 17:46:03.105266 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:46:03 crc kubenswrapper[4680]: E0217 17:46:03.105382 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.107177 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 20:01:05.392441168 +0000 UTC Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.144290 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.144334 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.144342 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.144354 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.144363 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:03Z","lastTransitionTime":"2026-02-17T17:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.246773 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.247473 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.247595 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.247723 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.247826 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:03Z","lastTransitionTime":"2026-02-17T17:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.349962 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.350179 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.350248 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.350332 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.350439 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:03Z","lastTransitionTime":"2026-02-17T17:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.453723 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.453772 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.453783 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.453799 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.453810 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:03Z","lastTransitionTime":"2026-02-17T17:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.556567 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.557300 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.557458 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.557657 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.557795 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:03Z","lastTransitionTime":"2026-02-17T17:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.660825 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.661258 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.661415 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.661493 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.661572 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:03Z","lastTransitionTime":"2026-02-17T17:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.764529 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.764615 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.764629 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.764643 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.764653 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:03Z","lastTransitionTime":"2026-02-17T17:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.867828 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.867859 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.867867 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.867900 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.867909 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:03Z","lastTransitionTime":"2026-02-17T17:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.970128 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.970174 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.970191 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.970212 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:03 crc kubenswrapper[4680]: I0217 17:46:03.970228 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:03Z","lastTransitionTime":"2026-02-17T17:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.072891 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.072941 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.072955 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.072973 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.072986 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:04Z","lastTransitionTime":"2026-02-17T17:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.107597 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 02:40:45.184079744 +0000 UTC Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.175027 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.175288 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.175449 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.175562 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.175640 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:04Z","lastTransitionTime":"2026-02-17T17:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.278655 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.279333 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.279443 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.279532 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.279603 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:04Z","lastTransitionTime":"2026-02-17T17:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.381782 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.381817 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.381826 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.381838 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.381847 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:04Z","lastTransitionTime":"2026-02-17T17:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.484469 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.484931 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.485074 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.485209 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.485302 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:04Z","lastTransitionTime":"2026-02-17T17:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.588047 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.588105 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.588118 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.588135 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.588147 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:04Z","lastTransitionTime":"2026-02-17T17:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.690202 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.690269 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.690285 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.690304 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.690341 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:04Z","lastTransitionTime":"2026-02-17T17:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.792986 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.793035 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.793242 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.793277 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.793293 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:04Z","lastTransitionTime":"2026-02-17T17:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.895728 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.895758 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.895767 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.895778 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.895786 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:04Z","lastTransitionTime":"2026-02-17T17:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.998397 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.998456 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.998471 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.998485 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:04 crc kubenswrapper[4680]: I0217 17:46:04.998495 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:04Z","lastTransitionTime":"2026-02-17T17:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.101126 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.101161 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.101174 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.101190 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.101204 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:05Z","lastTransitionTime":"2026-02-17T17:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.104338 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.104393 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:46:05 crc kubenswrapper[4680]: E0217 17:46:05.104437 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.104452 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.104495 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:46:05 crc kubenswrapper[4680]: E0217 17:46:05.104577 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:46:05 crc kubenswrapper[4680]: E0217 17:46:05.104731 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:46:05 crc kubenswrapper[4680]: E0217 17:46:05.104846 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.108425 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 02:50:51.410896892 +0000 UTC Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.204049 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.204118 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.204134 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.204156 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.204173 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:05Z","lastTransitionTime":"2026-02-17T17:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.306295 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.306930 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.307042 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.307129 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.307209 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:05Z","lastTransitionTime":"2026-02-17T17:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.409394 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.409449 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.409463 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.409477 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.409507 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:05Z","lastTransitionTime":"2026-02-17T17:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.511783 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.511843 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.511853 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.511866 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.511875 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:05Z","lastTransitionTime":"2026-02-17T17:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.618648 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.618867 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.619030 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.619148 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.619322 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:05Z","lastTransitionTime":"2026-02-17T17:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.720838 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.721108 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.721230 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.721340 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.721460 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:05Z","lastTransitionTime":"2026-02-17T17:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.824211 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.824444 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.824539 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.824608 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.824667 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:05Z","lastTransitionTime":"2026-02-17T17:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.926887 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.927152 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.927234 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.927324 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:05 crc kubenswrapper[4680]: I0217 17:46:05.927425 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:05Z","lastTransitionTime":"2026-02-17T17:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.029978 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.030016 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.030026 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.030039 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.030050 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:06Z","lastTransitionTime":"2026-02-17T17:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.109027 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 18:10:12.924986836 +0000 UTC Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.132681 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.132712 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.132720 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.132732 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.132740 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:06Z","lastTransitionTime":"2026-02-17T17:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.234893 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.235155 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.235265 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.235367 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.235463 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:06Z","lastTransitionTime":"2026-02-17T17:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.337308 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.337810 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.337941 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.338108 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.338246 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:06Z","lastTransitionTime":"2026-02-17T17:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.440495 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.440855 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.441045 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.441194 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.441351 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:06Z","lastTransitionTime":"2026-02-17T17:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.543504 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.543542 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.543552 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.543566 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.543575 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:06Z","lastTransitionTime":"2026-02-17T17:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.645757 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.645808 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.645827 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.645850 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.645866 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:06Z","lastTransitionTime":"2026-02-17T17:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.748616 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.748651 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.748663 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.748679 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.748690 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:06Z","lastTransitionTime":"2026-02-17T17:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.851763 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.852035 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.852140 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.852227 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.852307 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:06Z","lastTransitionTime":"2026-02-17T17:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.954518 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.954570 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.954582 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.954600 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:06 crc kubenswrapper[4680]: I0217 17:46:06.954613 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:06Z","lastTransitionTime":"2026-02-17T17:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.057294 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.057345 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.057354 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.057367 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.057378 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:07Z","lastTransitionTime":"2026-02-17T17:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.105394 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.105401 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:46:07 crc kubenswrapper[4680]: E0217 17:46:07.105548 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:46:07 crc kubenswrapper[4680]: E0217 17:46:07.105651 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.105419 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:46:07 crc kubenswrapper[4680]: E0217 17:46:07.105729 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.105869 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:46:07 crc kubenswrapper[4680]: E0217 17:46:07.105926 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.109994 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 16:43:22.354135661 +0000 UTC Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.159051 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.159082 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.159089 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.159102 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.159113 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:07Z","lastTransitionTime":"2026-02-17T17:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.261216 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.261261 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.261272 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.261285 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.261294 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:07Z","lastTransitionTime":"2026-02-17T17:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.363709 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.363748 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.363757 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.363779 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.363789 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:07Z","lastTransitionTime":"2026-02-17T17:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.466524 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.466557 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.466569 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.466586 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.466597 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:07Z","lastTransitionTime":"2026-02-17T17:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.568031 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.568452 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.568538 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.568646 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.568729 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:07Z","lastTransitionTime":"2026-02-17T17:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.671110 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.671151 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.671162 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.671185 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.671198 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:07Z","lastTransitionTime":"2026-02-17T17:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.773405 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.773453 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.773466 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.773482 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.773494 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:07Z","lastTransitionTime":"2026-02-17T17:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.875893 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.875930 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.875939 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.875952 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.875963 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:07Z","lastTransitionTime":"2026-02-17T17:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.978521 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.978567 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.978576 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.978590 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:07 crc kubenswrapper[4680]: I0217 17:46:07.978599 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:07Z","lastTransitionTime":"2026-02-17T17:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.082044 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.082103 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.082118 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.082137 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.082152 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:08Z","lastTransitionTime":"2026-02-17T17:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.110364 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 05:31:43.555747999 +0000 UTC Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.183843 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.183898 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.183912 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.183929 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.183942 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:08Z","lastTransitionTime":"2026-02-17T17:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.287350 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.287409 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.287430 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.287459 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.287475 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:08Z","lastTransitionTime":"2026-02-17T17:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.390526 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.390566 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.390578 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.390595 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.390607 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:08Z","lastTransitionTime":"2026-02-17T17:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.494096 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.494161 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.494182 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.494207 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.494225 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:08Z","lastTransitionTime":"2026-02-17T17:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.596873 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.596937 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.596951 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.596971 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.596987 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:08Z","lastTransitionTime":"2026-02-17T17:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.700118 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.700171 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.700182 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.700200 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.700212 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:08Z","lastTransitionTime":"2026-02-17T17:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.802117 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.802414 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.802512 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.802581 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.802650 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:08Z","lastTransitionTime":"2026-02-17T17:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.902114 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1270b10e-8243-4940-b8a1-e0d36a0ef933-metrics-certs\") pod \"network-metrics-daemon-psx9f\" (UID: \"1270b10e-8243-4940-b8a1-e0d36a0ef933\") " pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:46:08 crc kubenswrapper[4680]: E0217 17:46:08.902293 4680 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 17:46:08 crc kubenswrapper[4680]: E0217 17:46:08.902418 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1270b10e-8243-4940-b8a1-e0d36a0ef933-metrics-certs podName:1270b10e-8243-4940-b8a1-e0d36a0ef933 nodeName:}" failed. No retries permitted until 2026-02-17 17:47:12.902388209 +0000 UTC m=+162.453286908 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1270b10e-8243-4940-b8a1-e0d36a0ef933-metrics-certs") pod "network-metrics-daemon-psx9f" (UID: "1270b10e-8243-4940-b8a1-e0d36a0ef933") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.905609 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.905799 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.905904 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.905977 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:08 crc kubenswrapper[4680]: I0217 17:46:08.906037 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:08Z","lastTransitionTime":"2026-02-17T17:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.008456 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.008487 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.008496 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.008507 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.008516 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:09Z","lastTransitionTime":"2026-02-17T17:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.104803 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.104823 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.105014 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:46:09 crc kubenswrapper[4680]: E0217 17:46:09.105250 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:46:09 crc kubenswrapper[4680]: E0217 17:46:09.105418 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:46:09 crc kubenswrapper[4680]: E0217 17:46:09.105538 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.105806 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:46:09 crc kubenswrapper[4680]: E0217 17:46:09.105886 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.110186 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.110421 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.110499 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 11:02:03.308667312 +0000 UTC Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.110625 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.110988 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.111101 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:09Z","lastTransitionTime":"2026-02-17T17:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.214237 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.214359 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.214387 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.214428 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.214456 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:09Z","lastTransitionTime":"2026-02-17T17:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.316290 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.316356 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.316365 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.316377 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.316386 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:09Z","lastTransitionTime":"2026-02-17T17:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.419526 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.419582 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.419594 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.419616 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.419630 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:09Z","lastTransitionTime":"2026-02-17T17:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.522886 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.522937 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.522949 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.522969 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.522984 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:09Z","lastTransitionTime":"2026-02-17T17:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.625257 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.625295 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.625307 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.625324 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.625349 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:09Z","lastTransitionTime":"2026-02-17T17:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.728631 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.728685 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.728699 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.728719 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.728733 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:09Z","lastTransitionTime":"2026-02-17T17:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.832539 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.832618 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.832636 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.832667 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.832683 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:09Z","lastTransitionTime":"2026-02-17T17:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.935917 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.935986 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.936000 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.936023 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:09 crc kubenswrapper[4680]: I0217 17:46:09.936037 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:09Z","lastTransitionTime":"2026-02-17T17:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.039301 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.039412 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.039442 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.039462 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.039481 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:10Z","lastTransitionTime":"2026-02-17T17:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.111725 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 05:33:34.821747251 +0000 UTC Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.141862 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.141910 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.141922 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.141939 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.141951 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:10Z","lastTransitionTime":"2026-02-17T17:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.243975 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.244277 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.244399 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.244484 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.244549 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:10Z","lastTransitionTime":"2026-02-17T17:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.347614 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.347647 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.347657 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.347673 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.347685 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:10Z","lastTransitionTime":"2026-02-17T17:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.450525 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.451071 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.451175 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.451263 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.451406 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:10Z","lastTransitionTime":"2026-02-17T17:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.553920 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.553995 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.554006 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.554019 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.554028 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:10Z","lastTransitionTime":"2026-02-17T17:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.656837 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.656872 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.656924 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.656950 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.656960 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:10Z","lastTransitionTime":"2026-02-17T17:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.760199 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.760254 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.760266 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.760284 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.760298 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:10Z","lastTransitionTime":"2026-02-17T17:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.862492 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.862547 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.862556 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.862568 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.862578 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:10Z","lastTransitionTime":"2026-02-17T17:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.965151 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.965188 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.965200 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.965212 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:10 crc kubenswrapper[4680]: I0217 17:46:10.965221 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:10Z","lastTransitionTime":"2026-02-17T17:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.067641 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.067894 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.068081 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.068172 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.068240 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:11Z","lastTransitionTime":"2026-02-17T17:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.104983 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.105052 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.105147 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:46:11 crc kubenswrapper[4680]: E0217 17:46:11.105141 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:46:11 crc kubenswrapper[4680]: E0217 17:46:11.105253 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.105285 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:46:11 crc kubenswrapper[4680]: E0217 17:46:11.105346 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:46:11 crc kubenswrapper[4680]: E0217 17:46:11.105401 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.112179 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 05:27:09.99843069 +0000 UTC Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.143334 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=77.143308692 podStartE2EDuration="1m17.143308692s" podCreationTimestamp="2026-02-17 17:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:11.142394173 +0000 UTC m=+100.693292852" watchObservedRunningTime="2026-02-17 17:46:11.143308692 +0000 UTC m=+100.694207371" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.170267 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.170306 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.170338 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.170354 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.170368 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:11Z","lastTransitionTime":"2026-02-17T17:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.170876 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-l5ckv" podStartSLOduration=81.170865338 podStartE2EDuration="1m21.170865338s" podCreationTimestamp="2026-02-17 17:44:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:11.170114604 +0000 UTC m=+100.721013273" watchObservedRunningTime="2026-02-17 17:46:11.170865338 +0000 UTC m=+100.721764017" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.185814 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podStartSLOduration=80.185799872 podStartE2EDuration="1m20.185799872s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:11.185339578 +0000 UTC m=+100.736238257" watchObservedRunningTime="2026-02-17 17:46:11.185799872 +0000 UTC m=+100.736698551" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.201251 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-sdjhh" podStartSLOduration=80.201232661 podStartE2EDuration="1m20.201232661s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:11.200111977 +0000 UTC m=+100.751010656" watchObservedRunningTime="2026-02-17 17:46:11.201232661 +0000 UTC m=+100.752131330" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.258013 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=78.257993604 podStartE2EDuration="1m18.257993604s" podCreationTimestamp="2026-02-17 17:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:11.240900913 +0000 UTC m=+100.791799612" watchObservedRunningTime="2026-02-17 17:46:11.257993604 +0000 UTC m=+100.808892283" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.258322 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=48.258317205 podStartE2EDuration="48.258317205s" podCreationTimestamp="2026-02-17 17:45:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:11.257274092 +0000 UTC m=+100.808172781" watchObservedRunningTime="2026-02-17 17:46:11.258317205 +0000 UTC m=+100.809215884" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.272208 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.272547 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.272650 4680 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.272758 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.272863 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:11Z","lastTransitionTime":"2026-02-17T17:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.300395 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-knlwb" podStartSLOduration=80.300376391 podStartE2EDuration="1m20.300376391s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:11.286640205 +0000 UTC m=+100.837538884" watchObservedRunningTime="2026-02-17 17:46:11.300376391 +0000 UTC m=+100.851275070" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.313203 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=35.313183869 podStartE2EDuration="35.313183869s" podCreationTimestamp="2026-02-17 17:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:11.312681063 +0000 UTC m=+100.863579742" watchObservedRunningTime="2026-02-17 17:46:11.313183869 +0000 UTC m=+100.864082548" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.362076 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-xpqrx" podStartSLOduration=80.362042327 podStartE2EDuration="1m20.362042327s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:11.349236828 +0000 UTC m=+100.900135507" watchObservedRunningTime="2026-02-17 17:46:11.362042327 +0000 UTC m=+100.912941006" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.375311 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.375356 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.375366 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.375379 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.375388 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:11Z","lastTransitionTime":"2026-02-17T17:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.400061 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=81.400042507 podStartE2EDuration="1m21.400042507s" podCreationTimestamp="2026-02-17 17:44:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:11.399434918 +0000 UTC m=+100.950333617" watchObservedRunningTime="2026-02-17 17:46:11.400042507 +0000 UTC m=+100.950941186" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.400461 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-t2c5v" podStartSLOduration=80.40045709 podStartE2EDuration="1m20.40045709s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:11.379187939 +0000 UTC m=+100.930086618" watchObservedRunningTime="2026-02-17 17:46:11.40045709 +0000 UTC m=+100.951355769" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.477431 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.477474 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.477487 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.477502 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.477515 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:11Z","lastTransitionTime":"2026-02-17T17:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.590804 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.590831 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.590839 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.590851 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.590860 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:11Z","lastTransitionTime":"2026-02-17T17:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.693306 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.693356 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.693365 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.693377 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.693386 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:11Z","lastTransitionTime":"2026-02-17T17:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.794964 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.794998 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.795009 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.795024 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.795036 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:11Z","lastTransitionTime":"2026-02-17T17:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.896653 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.896699 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.896709 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.896723 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.896733 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:11Z","lastTransitionTime":"2026-02-17T17:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.999580 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.999873 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:11 crc kubenswrapper[4680]: I0217 17:46:11.999938 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.000062 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.000148 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:12Z","lastTransitionTime":"2026-02-17T17:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.102705 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.102769 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.102781 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.102802 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.102818 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:12Z","lastTransitionTime":"2026-02-17T17:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.105878 4680 scope.go:117] "RemoveContainer" containerID="c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af" Feb 17 17:46:12 crc kubenswrapper[4680]: E0217 17:46:12.106088 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8vx4n_openshift-ovn-kubernetes(2733b1c1-6f98-4184-b59b-123435eeb569)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.113733 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 15:19:02.518523167 +0000 UTC Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.205289 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.205376 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.205393 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.205418 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.205435 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:12Z","lastTransitionTime":"2026-02-17T17:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.308269 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.308305 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.308316 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.308347 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.308356 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:12Z","lastTransitionTime":"2026-02-17T17:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.410445 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.410479 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.410495 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.410511 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.410523 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:12Z","lastTransitionTime":"2026-02-17T17:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.512425 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.512451 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.512460 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.512494 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.512503 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:12Z","lastTransitionTime":"2026-02-17T17:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.533954 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.533989 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.534000 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.534015 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.534026 4680 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T17:46:12Z","lastTransitionTime":"2026-02-17T17:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.580304 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-c49mh"] Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.580739 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c49mh" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.582724 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.584896 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.584979 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.585020 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.642538 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/cf0a9569-4801-418b-94c5-86a8c684a166-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-c49mh\" (UID: \"cf0a9569-4801-418b-94c5-86a8c684a166\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c49mh" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.642592 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cf0a9569-4801-418b-94c5-86a8c684a166-service-ca\") pod \"cluster-version-operator-5c965bbfc6-c49mh\" (UID: \"cf0a9569-4801-418b-94c5-86a8c684a166\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c49mh" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.642609 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf0a9569-4801-418b-94c5-86a8c684a166-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-c49mh\" (UID: \"cf0a9569-4801-418b-94c5-86a8c684a166\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c49mh" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.642745 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/cf0a9569-4801-418b-94c5-86a8c684a166-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-c49mh\" (UID: \"cf0a9569-4801-418b-94c5-86a8c684a166\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c49mh" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.642779 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf0a9569-4801-418b-94c5-86a8c684a166-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-c49mh\" (UID: \"cf0a9569-4801-418b-94c5-86a8c684a166\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c49mh" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.743538 4680 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cf0a9569-4801-418b-94c5-86a8c684a166-service-ca\") pod \"cluster-version-operator-5c965bbfc6-c49mh\" (UID: \"cf0a9569-4801-418b-94c5-86a8c684a166\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c49mh" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.743572 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf0a9569-4801-418b-94c5-86a8c684a166-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-c49mh\" (UID: \"cf0a9569-4801-418b-94c5-86a8c684a166\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c49mh" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.743616 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/cf0a9569-4801-418b-94c5-86a8c684a166-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-c49mh\" (UID: \"cf0a9569-4801-418b-94c5-86a8c684a166\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c49mh" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.743631 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf0a9569-4801-418b-94c5-86a8c684a166-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-c49mh\" (UID: \"cf0a9569-4801-418b-94c5-86a8c684a166\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c49mh" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.743669 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/cf0a9569-4801-418b-94c5-86a8c684a166-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-c49mh\" (UID: \"cf0a9569-4801-418b-94c5-86a8c684a166\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c49mh" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.743728 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/cf0a9569-4801-418b-94c5-86a8c684a166-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-c49mh\" (UID: \"cf0a9569-4801-418b-94c5-86a8c684a166\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c49mh" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.744378 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/cf0a9569-4801-418b-94c5-86a8c684a166-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-c49mh\" (UID: \"cf0a9569-4801-418b-94c5-86a8c684a166\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c49mh" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.744548 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cf0a9569-4801-418b-94c5-86a8c684a166-service-ca\") pod \"cluster-version-operator-5c965bbfc6-c49mh\" (UID: \"cf0a9569-4801-418b-94c5-86a8c684a166\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c49mh" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.752007 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/cf0a9569-4801-418b-94c5-86a8c684a166-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-c49mh\" (UID: \"cf0a9569-4801-418b-94c5-86a8c684a166\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c49mh" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.759541 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf0a9569-4801-418b-94c5-86a8c684a166-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-c49mh\" (UID: \"cf0a9569-4801-418b-94c5-86a8c684a166\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c49mh" Feb 17 17:46:12 crc kubenswrapper[4680]: I0217 17:46:12.902143 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c49mh" Feb 17 17:46:12 crc kubenswrapper[4680]: W0217 17:46:12.914644 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf0a9569_4801_418b_94c5_86a8c684a166.slice/crio-a476c5b26a932e21bf6a2e37817c15b252daab906fe9c2814d2e7f4213e1f844 WatchSource:0}: Error finding container a476c5b26a932e21bf6a2e37817c15b252daab906fe9c2814d2e7f4213e1f844: Status 404 returned error can't find the container with id a476c5b26a932e21bf6a2e37817c15b252daab906fe9c2814d2e7f4213e1f844 Feb 17 17:46:13 crc kubenswrapper[4680]: I0217 17:46:13.105084 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:46:13 crc kubenswrapper[4680]: I0217 17:46:13.105142 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:46:13 crc kubenswrapper[4680]: E0217 17:46:13.105246 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:46:13 crc kubenswrapper[4680]: I0217 17:46:13.105310 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:46:13 crc kubenswrapper[4680]: I0217 17:46:13.105351 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:46:13 crc kubenswrapper[4680]: E0217 17:46:13.105454 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:46:13 crc kubenswrapper[4680]: E0217 17:46:13.105508 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:46:13 crc kubenswrapper[4680]: E0217 17:46:13.105577 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:46:13 crc kubenswrapper[4680]: I0217 17:46:13.114750 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 04:21:56.700239674 +0000 UTC Feb 17 17:46:13 crc kubenswrapper[4680]: I0217 17:46:13.114789 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 17 17:46:13 crc kubenswrapper[4680]: I0217 17:46:13.121894 4680 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 17 17:46:13 crc kubenswrapper[4680]: I0217 17:46:13.597597 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c49mh" event={"ID":"cf0a9569-4801-418b-94c5-86a8c684a166","Type":"ContainerStarted","Data":"7e792b92b125c85c539a2cf87e84658b5f9a65700e92c2496972017110c58ebf"} Feb 17 17:46:13 crc kubenswrapper[4680]: I0217 17:46:13.597641 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c49mh" event={"ID":"cf0a9569-4801-418b-94c5-86a8c684a166","Type":"ContainerStarted","Data":"a476c5b26a932e21bf6a2e37817c15b252daab906fe9c2814d2e7f4213e1f844"} Feb 17 17:46:13 crc kubenswrapper[4680]: I0217 17:46:13.609860 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c49mh" podStartSLOduration=82.609845341 podStartE2EDuration="1m22.609845341s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:13.60917106 +0000 UTC m=+103.160069739" watchObservedRunningTime="2026-02-17 17:46:13.609845341 +0000 UTC m=+103.160744020" Feb 17 17:46:15 crc kubenswrapper[4680]: I0217 17:46:15.105203 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:46:15 crc kubenswrapper[4680]: E0217 17:46:15.105339 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:46:15 crc kubenswrapper[4680]: I0217 17:46:15.105771 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:46:15 crc kubenswrapper[4680]: I0217 17:46:15.105788 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:46:15 crc kubenswrapper[4680]: E0217 17:46:15.105971 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:46:15 crc kubenswrapper[4680]: I0217 17:46:15.106061 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:46:15 crc kubenswrapper[4680]: E0217 17:46:15.106201 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:46:15 crc kubenswrapper[4680]: E0217 17:46:15.106294 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:46:17 crc kubenswrapper[4680]: I0217 17:46:17.104947 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:46:17 crc kubenswrapper[4680]: I0217 17:46:17.105004 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:46:17 crc kubenswrapper[4680]: I0217 17:46:17.105052 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:46:17 crc kubenswrapper[4680]: I0217 17:46:17.105095 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:46:17 crc kubenswrapper[4680]: E0217 17:46:17.105116 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:46:17 crc kubenswrapper[4680]: E0217 17:46:17.105209 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:46:17 crc kubenswrapper[4680]: E0217 17:46:17.105283 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:46:17 crc kubenswrapper[4680]: E0217 17:46:17.105399 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:46:19 crc kubenswrapper[4680]: I0217 17:46:19.105248 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:46:19 crc kubenswrapper[4680]: E0217 17:46:19.105402 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:46:19 crc kubenswrapper[4680]: I0217 17:46:19.105551 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:46:19 crc kubenswrapper[4680]: I0217 17:46:19.105593 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:46:19 crc kubenswrapper[4680]: I0217 17:46:19.105601 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:46:19 crc kubenswrapper[4680]: E0217 17:46:19.105689 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:46:19 crc kubenswrapper[4680]: E0217 17:46:19.105745 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:46:19 crc kubenswrapper[4680]: E0217 17:46:19.105841 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:46:21 crc kubenswrapper[4680]: I0217 17:46:21.107221 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:46:21 crc kubenswrapper[4680]: I0217 17:46:21.108295 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:46:21 crc kubenswrapper[4680]: I0217 17:46:21.108395 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:46:21 crc kubenswrapper[4680]: I0217 17:46:21.108374 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:46:21 crc kubenswrapper[4680]: E0217 17:46:21.108286 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:46:21 crc kubenswrapper[4680]: E0217 17:46:21.108653 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:46:21 crc kubenswrapper[4680]: E0217 17:46:21.108757 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:46:21 crc kubenswrapper[4680]: E0217 17:46:21.108847 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:46:23 crc kubenswrapper[4680]: I0217 17:46:23.104922 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:46:23 crc kubenswrapper[4680]: I0217 17:46:23.105047 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:46:23 crc kubenswrapper[4680]: E0217 17:46:23.105122 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:46:23 crc kubenswrapper[4680]: I0217 17:46:23.105613 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:46:23 crc kubenswrapper[4680]: I0217 17:46:23.105687 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:46:23 crc kubenswrapper[4680]: E0217 17:46:23.105875 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:46:23 crc kubenswrapper[4680]: E0217 17:46:23.105994 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:46:23 crc kubenswrapper[4680]: E0217 17:46:23.106074 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:46:23 crc kubenswrapper[4680]: I0217 17:46:23.106364 4680 scope.go:117] "RemoveContainer" containerID="c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af" Feb 17 17:46:23 crc kubenswrapper[4680]: I0217 17:46:23.627838 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vx4n_2733b1c1-6f98-4184-b59b-123435eeb569/ovnkube-controller/3.log" Feb 17 17:46:23 crc kubenswrapper[4680]: I0217 17:46:23.632651 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" event={"ID":"2733b1c1-6f98-4184-b59b-123435eeb569","Type":"ContainerStarted","Data":"140e19da03fcd14c04c3e171d2d660b792afdbba8afb628c3cf4937fed74b4fe"} Feb 17 17:46:23 crc kubenswrapper[4680]: I0217 17:46:23.633551 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:46:24 crc kubenswrapper[4680]: I0217 17:46:24.034631 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" podStartSLOduration=93.034611702 podStartE2EDuration="1m33.034611702s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:23.666816427 +0000 UTC m=+113.217715096" watchObservedRunningTime="2026-02-17 17:46:24.034611702 +0000 UTC m=+113.585510381" Feb 17 17:46:24 crc kubenswrapper[4680]: I0217 17:46:24.035177 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-psx9f"] Feb 17 17:46:24 crc kubenswrapper[4680]: I0217 17:46:24.035273 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:46:24 crc kubenswrapper[4680]: E0217 17:46:24.035396 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:46:25 crc kubenswrapper[4680]: I0217 17:46:25.104940 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:46:25 crc kubenswrapper[4680]: I0217 17:46:25.104995 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:46:25 crc kubenswrapper[4680]: E0217 17:46:25.105360 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 17:46:25 crc kubenswrapper[4680]: I0217 17:46:25.104996 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:46:25 crc kubenswrapper[4680]: E0217 17:46:25.105539 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 17:46:25 crc kubenswrapper[4680]: E0217 17:46:25.105678 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.104837 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:46:26 crc kubenswrapper[4680]: E0217 17:46:26.105050 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-psx9f" podUID="1270b10e-8243-4940-b8a1-e0d36a0ef933" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.464544 4680 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.464712 4680 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.506615 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-dp7b9"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.521280 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.533066 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.536424 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.536499 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.536723 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.536844 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.537797 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.555573 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.557519 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.560077 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.561390 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vhvpq"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.561987 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-vhvpq" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.565197 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5zzk5"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.565601 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-ffq2r"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.566001 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ffq2r" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.566472 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5zzk5" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.567912 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.568029 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.568063 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.568107 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.568188 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.568232 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.568271 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.568383 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.568634 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.568911 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.570663 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbrt9"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.571234 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbrt9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.571548 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qxkds"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.571929 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.572487 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xmlr4"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.572858 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-xmlr4" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.579743 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.579955 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.580075 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.580306 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.580394 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-f2h2c"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.580440 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.580554 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.580683 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.580790 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.580861 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.581061 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2h2c" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.583857 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.584030 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.584119 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.584152 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2l4fg"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.584642 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-2l4fg" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.585239 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.585844 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.589819 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.591526 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.591674 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.592099 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.592214 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.592363 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.593167 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.593421 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.593689 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.593848 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.593927 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.594176 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.594359 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.594608 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.594712 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.594798 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.594894 4680 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.594982 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.597062 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.600590 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-m96mg"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.601136 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-z9fcm"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.601474 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-z9fcm" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.601474 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-m96mg" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.603772 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.603922 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.608799 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.608903 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.608946 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.609032 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.609224 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.609336 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.609457 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.609482 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.609531 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.609619 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.609484 4680 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.609675 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.609686 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.609722 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.609940 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.610032 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.611639 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.611775 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.612884 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.613120 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.626285 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.632746 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.634154 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k7lq2"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.646015 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.646733 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-z8mj5"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.647018 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k7lq2" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.648096 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.648480 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xpf6f"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.648748 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.648981 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-xpf6f" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.649015 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.651578 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kb4mk"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.652030 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kb4mk" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.652955 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.662022 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kpzts"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.662462 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kpzts" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.662814 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5c84c"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.666978 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.667180 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.667272 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.667306 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.667364 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.667520 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.672186 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vdrlm"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.672806 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdrlm" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.674731 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.675211 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.677439 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5c84c" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.677741 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-t9gdn"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.678131 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4bdxb"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.678168 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.680456 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.683434 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.683739 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.684102 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.684284 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.684447 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.684602 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.684778 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.685302 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.685483 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.685652 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.685854 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.685999 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.686908 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4bdxb" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.687144 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-t9gdn" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688013 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa-serving-cert\") pod \"controller-manager-879f6c89f-5zzk5\" (UID: \"8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zzk5" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688044 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwsmj\" (UniqueName: \"kubernetes.io/projected/2612e265-77ef-4e49-ba43-dddb142b0856-kube-api-access-dwsmj\") pod \"machine-approver-56656f9798-ffq2r\" (UID: \"2612e265-77ef-4e49-ba43-dddb142b0856\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ffq2r" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688081 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dk2b\" (UniqueName: \"kubernetes.io/projected/13cf25e6-413f-472e-8178-439df59c10d7-kube-api-access-5dk2b\") pod \"apiserver-76f77b778f-dp7b9\" (UID: \"13cf25e6-413f-472e-8178-439df59c10d7\") " pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688102 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7dac5a2-b81a-483f-9686-a38012959621-config\") pod \"authentication-operator-69f744f599-xmlr4\" (UID: \"d7dac5a2-b81a-483f-9686-a38012959621\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmlr4" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688118 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sccxt\" (UniqueName: \"kubernetes.io/projected/3abf57a9-2bd6-4f7f-ba75-5fe928201df8-kube-api-access-sccxt\") pod \"machine-api-operator-5694c8668f-vhvpq\" (UID: \"3abf57a9-2bd6-4f7f-ba75-5fe928201df8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vhvpq" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688136 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/13cf25e6-413f-472e-8178-439df59c10d7-etcd-serving-ca\") pod \"apiserver-76f77b778f-dp7b9\" (UID: \"13cf25e6-413f-472e-8178-439df59c10d7\") " pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688178 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad93e803-b1ee-4401-a6c4-0f510b6b8735-serving-cert\") pod \"openshift-config-operator-7777fb866f-f2h2c\" (UID: \"ad93e803-b1ee-4401-a6c4-0f510b6b8735\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2h2c" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688198 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/13cf25e6-413f-472e-8178-439df59c10d7-encryption-config\") pod \"apiserver-76f77b778f-dp7b9\" (UID: \"13cf25e6-413f-472e-8178-439df59c10d7\") " 
pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688230 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2612e265-77ef-4e49-ba43-dddb142b0856-config\") pod \"machine-approver-56656f9798-ffq2r\" (UID: \"2612e265-77ef-4e49-ba43-dddb142b0856\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ffq2r" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688257 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/387f58c3-60a3-4f10-a29c-3bd2afca3e5d-encryption-config\") pod \"apiserver-7bbb656c7d-w88hs\" (UID: \"387f58c3-60a3-4f10-a29c-3bd2afca3e5d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688282 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7dac5a2-b81a-483f-9686-a38012959621-service-ca-bundle\") pod \"authentication-operator-69f744f599-xmlr4\" (UID: \"d7dac5a2-b81a-483f-9686-a38012959621\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmlr4" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688297 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13cf25e6-413f-472e-8178-439df59c10d7-serving-cert\") pod \"apiserver-76f77b778f-dp7b9\" (UID: \"13cf25e6-413f-472e-8178-439df59c10d7\") " pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688312 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cb58\" (UniqueName: \"kubernetes.io/projected/387f58c3-60a3-4f10-a29c-3bd2afca3e5d-kube-api-access-8cb58\") pod \"apiserver-7bbb656c7d-w88hs\" (UID: \"387f58c3-60a3-4f10-a29c-3bd2afca3e5d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688351 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2612e265-77ef-4e49-ba43-dddb142b0856-auth-proxy-config\") pod \"machine-approver-56656f9798-ffq2r\" (UID: \"2612e265-77ef-4e49-ba43-dddb142b0856\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ffq2r" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688372 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3abf57a9-2bd6-4f7f-ba75-5fe928201df8-images\") pod \"machine-api-operator-5694c8668f-vhvpq\" (UID: \"3abf57a9-2bd6-4f7f-ba75-5fe928201df8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vhvpq" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688390 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b1c48715-1326-41fd-8b2e-007578010c80-audit-dir\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 
17:46:26.688405 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688423 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7dac5a2-b81a-483f-9686-a38012959621-serving-cert\") pod \"authentication-operator-69f744f599-xmlr4\" (UID: \"d7dac5a2-b81a-483f-9686-a38012959621\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmlr4" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688440 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/387f58c3-60a3-4f10-a29c-3bd2afca3e5d-audit-policies\") pod \"apiserver-7bbb656c7d-w88hs\" (UID: \"387f58c3-60a3-4f10-a29c-3bd2afca3e5d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688454 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa-client-ca\") pod \"controller-manager-879f6c89f-5zzk5\" (UID: \"8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zzk5" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688470 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/13cf25e6-413f-472e-8178-439df59c10d7-etcd-client\") pod \"apiserver-76f77b778f-dp7b9\" (UID: \"13cf25e6-413f-472e-8178-439df59c10d7\") " pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688485 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b1c48715-1326-41fd-8b2e-007578010c80-audit-policies\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688500 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688519 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688535 4680 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688553 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/387f58c3-60a3-4f10-a29c-3bd2afca3e5d-audit-dir\") pod \"apiserver-7bbb656c7d-w88hs\" (UID: \"387f58c3-60a3-4f10-a29c-3bd2afca3e5d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688568 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688585 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4ns2\" (UniqueName: \"kubernetes.io/projected/8e87e04c-2d08-4f13-8b5a-db27f849132e-kube-api-access-b4ns2\") pod \"downloads-7954f5f757-z9fcm\" (UID: \"8e87e04c-2d08-4f13-8b5a-db27f849132e\") " pod="openshift-console/downloads-7954f5f757-z9fcm" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688601 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szhmv\" (UniqueName: \"kubernetes.io/projected/8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa-kube-api-access-szhmv\") pod \"controller-manager-879f6c89f-5zzk5\" (UID: \"8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zzk5" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688616 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688632 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688647 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ad93e803-b1ee-4401-a6c4-0f510b6b8735-available-featuregates\") pod \"openshift-config-operator-7777fb866f-f2h2c\" (UID: \"ad93e803-b1ee-4401-a6c4-0f510b6b8735\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2h2c" 
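The reconciler_common.go entries in this stretch of the log show the kubelet's volume manager starting VerifyControllerAttachedVolume for each volume declared in the pod specs it has just received: secret, configmap, projected (service-account token), host-path, and empty-dir sources, all of which must be verified and mounted before the pods' containers can start. As a minimal sketch only, assuming the public k8s.io/api/core/v1 Go types and using abridged, partly hypothetical names (the host path and source configmap name below are placeholders, not values read from this cluster), a pod-spec volume list producing entries like those above looks roughly like this:

// Sketch under assumptions: k8s.io/api/core/v1 types; volume names mirror
// ones seen in the log, but the host path and configmap name are placeholders.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	volumes := []corev1.Volume{
		// Secret-backed volume (compare "v4-0-config-system-session" above).
		{Name: "v4-0-config-system-session", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "v4-0-config-system-session"},
		}},
		// ConfigMap-backed volume (compare "audit-policies" above); source name is a placeholder.
		{Name: "audit-policies", VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "example-audit-policies"},
			},
		}},
		// hostPath volume (compare "audit-dir" above); path is a placeholder.
		{Name: "audit-dir", VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/var/log/example-audit"},
		}},
		// emptyDir volume (compare "available-featuregates" above).
		{Name: "available-featuregates", VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{},
		}},
	}
	for _, v := range volumes {
		fmt.Println("declared volume:", v.Name)
	}
}

Secret- and configmap-backed sources in such a list are the reason for the many "Caches populated for *v1.Secret/*v1.ConfigMap" reflector entries earlier in the log: the kubelet watches each referenced object so it can populate the corresponding volume.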
Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688661 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa-config\") pod \"controller-manager-879f6c89f-5zzk5\" (UID: \"8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zzk5" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688678 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0da17dc9-f5b1-4877-8242-28d767846f40-metrics-tls\") pod \"dns-operator-744455d44c-m96mg\" (UID: \"0da17dc9-f5b1-4877-8242-28d767846f40\") " pod="openshift-dns-operator/dns-operator-744455d44c-m96mg" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688693 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5zzk5\" (UID: \"8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zzk5" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688711 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3abf57a9-2bd6-4f7f-ba75-5fe928201df8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vhvpq\" (UID: \"3abf57a9-2bd6-4f7f-ba75-5fe928201df8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vhvpq" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688726 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688742 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c547720-450f-4714-827e-7694bd2b89f7-config\") pod \"console-operator-58897d9998-2l4fg\" (UID: \"8c547720-450f-4714-827e-7694bd2b89f7\") " pod="openshift-console-operator/console-operator-58897d9998-2l4fg" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688755 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2612e265-77ef-4e49-ba43-dddb142b0856-machine-approver-tls\") pod \"machine-approver-56656f9798-ffq2r\" (UID: \"2612e265-77ef-4e49-ba43-dddb142b0856\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ffq2r" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688771 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/387f58c3-60a3-4f10-a29c-3bd2afca3e5d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-w88hs\" (UID: \"387f58c3-60a3-4f10-a29c-3bd2afca3e5d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688785 
4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688803 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjchx\" (UniqueName: \"kubernetes.io/projected/0da17dc9-f5b1-4877-8242-28d767846f40-kube-api-access-bjchx\") pod \"dns-operator-744455d44c-m96mg\" (UID: \"0da17dc9-f5b1-4877-8242-28d767846f40\") " pod="openshift-dns-operator/dns-operator-744455d44c-m96mg" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688817 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13cf25e6-413f-472e-8178-439df59c10d7-audit-dir\") pod \"apiserver-76f77b778f-dp7b9\" (UID: \"13cf25e6-413f-472e-8178-439df59c10d7\") " pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688831 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/13cf25e6-413f-472e-8178-439df59c10d7-node-pullsecrets\") pod \"apiserver-76f77b778f-dp7b9\" (UID: \"13cf25e6-413f-472e-8178-439df59c10d7\") " pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688850 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13cf25e6-413f-472e-8178-439df59c10d7-trusted-ca-bundle\") pod \"apiserver-76f77b778f-dp7b9\" (UID: \"13cf25e6-413f-472e-8178-439df59c10d7\") " pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688864 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/13cf25e6-413f-472e-8178-439df59c10d7-audit\") pod \"apiserver-76f77b778f-dp7b9\" (UID: \"13cf25e6-413f-472e-8178-439df59c10d7\") " pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688879 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688895 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/387f58c3-60a3-4f10-a29c-3bd2afca3e5d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-w88hs\" (UID: \"387f58c3-60a3-4f10-a29c-3bd2afca3e5d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688910 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-565qs\" (UniqueName: \"kubernetes.io/projected/ae5ad2a7-30fa-4e6c-9cb7-273f3d260318-kube-api-access-565qs\") pod \"openshift-apiserver-operator-796bbdcf4f-jbrt9\" (UID: \"ae5ad2a7-30fa-4e6c-9cb7-273f3d260318\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbrt9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688924 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c547720-450f-4714-827e-7694bd2b89f7-serving-cert\") pod \"console-operator-58897d9998-2l4fg\" (UID: \"8c547720-450f-4714-827e-7694bd2b89f7\") " pod="openshift-console-operator/console-operator-58897d9998-2l4fg" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688937 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fq5k\" (UniqueName: \"kubernetes.io/projected/d7dac5a2-b81a-483f-9686-a38012959621-kube-api-access-4fq5k\") pod \"authentication-operator-69f744f599-xmlr4\" (UID: \"d7dac5a2-b81a-483f-9686-a38012959621\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmlr4" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688951 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3abf57a9-2bd6-4f7f-ba75-5fe928201df8-config\") pod \"machine-api-operator-5694c8668f-vhvpq\" (UID: \"3abf57a9-2bd6-4f7f-ba75-5fe928201df8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vhvpq" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688965 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/387f58c3-60a3-4f10-a29c-3bd2afca3e5d-etcd-client\") pod \"apiserver-7bbb656c7d-w88hs\" (UID: \"387f58c3-60a3-4f10-a29c-3bd2afca3e5d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688979 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.688993 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8c547720-450f-4714-827e-7694bd2b89f7-trusted-ca\") pod \"console-operator-58897d9998-2l4fg\" (UID: \"8c547720-450f-4714-827e-7694bd2b89f7\") " pod="openshift-console-operator/console-operator-58897d9998-2l4fg" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.689008 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/13cf25e6-413f-472e-8178-439df59c10d7-image-import-ca\") pod \"apiserver-76f77b778f-dp7b9\" (UID: \"13cf25e6-413f-472e-8178-439df59c10d7\") " pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.689023 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/13cf25e6-413f-472e-8178-439df59c10d7-config\") pod \"apiserver-76f77b778f-dp7b9\" (UID: \"13cf25e6-413f-472e-8178-439df59c10d7\") " pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.689041 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cswfz\" (UniqueName: \"kubernetes.io/projected/ad93e803-b1ee-4401-a6c4-0f510b6b8735-kube-api-access-cswfz\") pod \"openshift-config-operator-7777fb866f-f2h2c\" (UID: \"ad93e803-b1ee-4401-a6c4-0f510b6b8735\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2h2c" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.689055 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/387f58c3-60a3-4f10-a29c-3bd2afca3e5d-serving-cert\") pod \"apiserver-7bbb656c7d-w88hs\" (UID: \"387f58c3-60a3-4f10-a29c-3bd2afca3e5d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.689069 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7dac5a2-b81a-483f-9686-a38012959621-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xmlr4\" (UID: \"d7dac5a2-b81a-483f-9686-a38012959621\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmlr4" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.689092 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae5ad2a7-30fa-4e6c-9cb7-273f3d260318-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jbrt9\" (UID: \"ae5ad2a7-30fa-4e6c-9cb7-273f3d260318\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbrt9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.689105 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae5ad2a7-30fa-4e6c-9cb7-273f3d260318-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jbrt9\" (UID: \"ae5ad2a7-30fa-4e6c-9cb7-273f3d260318\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbrt9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.689121 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hmtx\" (UniqueName: \"kubernetes.io/projected/b1c48715-1326-41fd-8b2e-007578010c80-kube-api-access-8hmtx\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.689135 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h27fz\" (UniqueName: \"kubernetes.io/projected/8c547720-450f-4714-827e-7694bd2b89f7-kube-api-access-h27fz\") pod \"console-operator-58897d9998-2l4fg\" (UID: \"8c547720-450f-4714-827e-7694bd2b89f7\") " pod="openshift-console-operator/console-operator-58897d9998-2l4fg" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.689778 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 17 17:46:26 crc 
kubenswrapper[4680]: I0217 17:46:26.690074 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.690121 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.692489 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-nkxs4"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.693000 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-nkxs4" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.701010 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.713740 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.715132 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gt6kd"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.727858 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bzv2j"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.730191 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-prsk2"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.736207 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.736464 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.736752 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s9v7z"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.737173 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s9v7z" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.737443 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-prsk2" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.736771 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gt6kd" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.738039 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bzv2j" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.738223 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rjbs8"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.738627 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rjbs8" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.738960 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hqvn6"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.739431 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hqvn6" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.741839 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522505-8tbt2"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.742479 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khf8m"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.742860 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-rptfq"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.743380 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-rptfq" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.743450 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522505-8tbt2" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.743743 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khf8m" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.743845 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-x9rhw"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.744584 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-gd8pp"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.744836 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x9rhw" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.745190 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-gd8pp" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.745910 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lsp6v"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.746374 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lsp6v" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.747713 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c462h"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.748363 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c462h" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.751515 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.752304 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fxsf8"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.753060 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fxsf8" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.758458 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rbdrt"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.760532 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fmz28"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.761432 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rbdrt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.766704 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5zzk5"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.766826 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fmz28" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.766916 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.768642 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbrt9"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.770366 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-f2h2c"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.770921 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.776071 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qxkds"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.783553 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-dp7b9"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.783881 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.788463 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2l4fg"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790014 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xmlr4"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790417 
4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b1c48715-1326-41fd-8b2e-007578010c80-audit-dir\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790442 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790459 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7dac5a2-b81a-483f-9686-a38012959621-serving-cert\") pod \"authentication-operator-69f744f599-xmlr4\" (UID: \"d7dac5a2-b81a-483f-9686-a38012959621\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmlr4" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790476 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/387f58c3-60a3-4f10-a29c-3bd2afca3e5d-audit-policies\") pod \"apiserver-7bbb656c7d-w88hs\" (UID: \"387f58c3-60a3-4f10-a29c-3bd2afca3e5d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790490 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa-client-ca\") pod \"controller-manager-879f6c89f-5zzk5\" (UID: \"8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zzk5" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790530 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/13cf25e6-413f-472e-8178-439df59c10d7-etcd-client\") pod \"apiserver-76f77b778f-dp7b9\" (UID: \"13cf25e6-413f-472e-8178-439df59c10d7\") " pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790547 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b1c48715-1326-41fd-8b2e-007578010c80-audit-policies\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790563 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790578 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790593 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790607 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/387f58c3-60a3-4f10-a29c-3bd2afca3e5d-audit-dir\") pod \"apiserver-7bbb656c7d-w88hs\" (UID: \"387f58c3-60a3-4f10-a29c-3bd2afca3e5d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790624 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790638 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4ns2\" (UniqueName: \"kubernetes.io/projected/8e87e04c-2d08-4f13-8b5a-db27f849132e-kube-api-access-b4ns2\") pod \"downloads-7954f5f757-z9fcm\" (UID: \"8e87e04c-2d08-4f13-8b5a-db27f849132e\") " pod="openshift-console/downloads-7954f5f757-z9fcm" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790655 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szhmv\" (UniqueName: \"kubernetes.io/projected/8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa-kube-api-access-szhmv\") pod \"controller-manager-879f6c89f-5zzk5\" (UID: \"8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zzk5" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790669 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790686 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790703 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ad93e803-b1ee-4401-a6c4-0f510b6b8735-available-featuregates\") pod \"openshift-config-operator-7777fb866f-f2h2c\" (UID: \"ad93e803-b1ee-4401-a6c4-0f510b6b8735\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2h2c" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790720 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa-config\") pod \"controller-manager-879f6c89f-5zzk5\" (UID: \"8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zzk5" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790738 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0da17dc9-f5b1-4877-8242-28d767846f40-metrics-tls\") pod \"dns-operator-744455d44c-m96mg\" (UID: \"0da17dc9-f5b1-4877-8242-28d767846f40\") " pod="openshift-dns-operator/dns-operator-744455d44c-m96mg" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790753 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5zzk5\" (UID: \"8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zzk5" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790769 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3abf57a9-2bd6-4f7f-ba75-5fe928201df8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vhvpq\" (UID: \"3abf57a9-2bd6-4f7f-ba75-5fe928201df8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vhvpq" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790785 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790801 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c547720-450f-4714-827e-7694bd2b89f7-config\") pod \"console-operator-58897d9998-2l4fg\" (UID: \"8c547720-450f-4714-827e-7694bd2b89f7\") " pod="openshift-console-operator/console-operator-58897d9998-2l4fg" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790815 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2612e265-77ef-4e49-ba43-dddb142b0856-machine-approver-tls\") pod \"machine-approver-56656f9798-ffq2r\" (UID: \"2612e265-77ef-4e49-ba43-dddb142b0856\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ffq2r" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790830 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/387f58c3-60a3-4f10-a29c-3bd2afca3e5d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-w88hs\" (UID: \"387f58c3-60a3-4f10-a29c-3bd2afca3e5d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790847 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790863 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjchx\" (UniqueName: \"kubernetes.io/projected/0da17dc9-f5b1-4877-8242-28d767846f40-kube-api-access-bjchx\") pod \"dns-operator-744455d44c-m96mg\" (UID: \"0da17dc9-f5b1-4877-8242-28d767846f40\") " pod="openshift-dns-operator/dns-operator-744455d44c-m96mg" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790881 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13cf25e6-413f-472e-8178-439df59c10d7-audit-dir\") pod \"apiserver-76f77b778f-dp7b9\" (UID: \"13cf25e6-413f-472e-8178-439df59c10d7\") " pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790924 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/13cf25e6-413f-472e-8178-439df59c10d7-node-pullsecrets\") pod \"apiserver-76f77b778f-dp7b9\" (UID: \"13cf25e6-413f-472e-8178-439df59c10d7\") " pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790940 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/13cf25e6-413f-472e-8178-439df59c10d7-audit\") pod \"apiserver-76f77b778f-dp7b9\" (UID: \"13cf25e6-413f-472e-8178-439df59c10d7\") " pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790958 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13cf25e6-413f-472e-8178-439df59c10d7-trusted-ca-bundle\") pod \"apiserver-76f77b778f-dp7b9\" (UID: \"13cf25e6-413f-472e-8178-439df59c10d7\") " pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790976 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.790990 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/387f58c3-60a3-4f10-a29c-3bd2afca3e5d-etcd-client\") pod \"apiserver-7bbb656c7d-w88hs\" (UID: \"387f58c3-60a3-4f10-a29c-3bd2afca3e5d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.791007 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/387f58c3-60a3-4f10-a29c-3bd2afca3e5d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-w88hs\" (UID: \"387f58c3-60a3-4f10-a29c-3bd2afca3e5d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" Feb 17 17:46:26 crc 
kubenswrapper[4680]: I0217 17:46:26.791026 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-565qs\" (UniqueName: \"kubernetes.io/projected/ae5ad2a7-30fa-4e6c-9cb7-273f3d260318-kube-api-access-565qs\") pod \"openshift-apiserver-operator-796bbdcf4f-jbrt9\" (UID: \"ae5ad2a7-30fa-4e6c-9cb7-273f3d260318\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbrt9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.791057 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c547720-450f-4714-827e-7694bd2b89f7-serving-cert\") pod \"console-operator-58897d9998-2l4fg\" (UID: \"8c547720-450f-4714-827e-7694bd2b89f7\") " pod="openshift-console-operator/console-operator-58897d9998-2l4fg" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.791072 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fq5k\" (UniqueName: \"kubernetes.io/projected/d7dac5a2-b81a-483f-9686-a38012959621-kube-api-access-4fq5k\") pod \"authentication-operator-69f744f599-xmlr4\" (UID: \"d7dac5a2-b81a-483f-9686-a38012959621\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmlr4" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.791087 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3abf57a9-2bd6-4f7f-ba75-5fe928201df8-config\") pod \"machine-api-operator-5694c8668f-vhvpq\" (UID: \"3abf57a9-2bd6-4f7f-ba75-5fe928201df8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vhvpq" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.791103 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.791117 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8c547720-450f-4714-827e-7694bd2b89f7-trusted-ca\") pod \"console-operator-58897d9998-2l4fg\" (UID: \"8c547720-450f-4714-827e-7694bd2b89f7\") " pod="openshift-console-operator/console-operator-58897d9998-2l4fg" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.791133 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/13cf25e6-413f-472e-8178-439df59c10d7-image-import-ca\") pod \"apiserver-76f77b778f-dp7b9\" (UID: \"13cf25e6-413f-472e-8178-439df59c10d7\") " pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.791148 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13cf25e6-413f-472e-8178-439df59c10d7-config\") pod \"apiserver-76f77b778f-dp7b9\" (UID: \"13cf25e6-413f-472e-8178-439df59c10d7\") " pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.791171 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cswfz\" (UniqueName: 
\"kubernetes.io/projected/ad93e803-b1ee-4401-a6c4-0f510b6b8735-kube-api-access-cswfz\") pod \"openshift-config-operator-7777fb866f-f2h2c\" (UID: \"ad93e803-b1ee-4401-a6c4-0f510b6b8735\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2h2c" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.791187 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/387f58c3-60a3-4f10-a29c-3bd2afca3e5d-serving-cert\") pod \"apiserver-7bbb656c7d-w88hs\" (UID: \"387f58c3-60a3-4f10-a29c-3bd2afca3e5d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.791202 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7dac5a2-b81a-483f-9686-a38012959621-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xmlr4\" (UID: \"d7dac5a2-b81a-483f-9686-a38012959621\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmlr4" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.791219 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae5ad2a7-30fa-4e6c-9cb7-273f3d260318-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jbrt9\" (UID: \"ae5ad2a7-30fa-4e6c-9cb7-273f3d260318\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbrt9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.791227 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-z9fcm"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.791236 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae5ad2a7-30fa-4e6c-9cb7-273f3d260318-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jbrt9\" (UID: \"ae5ad2a7-30fa-4e6c-9cb7-273f3d260318\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbrt9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.791317 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hmtx\" (UniqueName: \"kubernetes.io/projected/b1c48715-1326-41fd-8b2e-007578010c80-kube-api-access-8hmtx\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.791415 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h27fz\" (UniqueName: \"kubernetes.io/projected/8c547720-450f-4714-827e-7694bd2b89f7-kube-api-access-h27fz\") pod \"console-operator-58897d9998-2l4fg\" (UID: \"8c547720-450f-4714-827e-7694bd2b89f7\") " pod="openshift-console-operator/console-operator-58897d9998-2l4fg" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.791452 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa-serving-cert\") pod \"controller-manager-879f6c89f-5zzk5\" (UID: \"8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zzk5" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.791476 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-dwsmj\" (UniqueName: \"kubernetes.io/projected/2612e265-77ef-4e49-ba43-dddb142b0856-kube-api-access-dwsmj\") pod \"machine-approver-56656f9798-ffq2r\" (UID: \"2612e265-77ef-4e49-ba43-dddb142b0856\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ffq2r" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.791523 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7dac5a2-b81a-483f-9686-a38012959621-config\") pod \"authentication-operator-69f744f599-xmlr4\" (UID: \"d7dac5a2-b81a-483f-9686-a38012959621\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmlr4" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.791546 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dk2b\" (UniqueName: \"kubernetes.io/projected/13cf25e6-413f-472e-8178-439df59c10d7-kube-api-access-5dk2b\") pod \"apiserver-76f77b778f-dp7b9\" (UID: \"13cf25e6-413f-472e-8178-439df59c10d7\") " pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.791663 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sccxt\" (UniqueName: \"kubernetes.io/projected/3abf57a9-2bd6-4f7f-ba75-5fe928201df8-kube-api-access-sccxt\") pod \"machine-api-operator-5694c8668f-vhvpq\" (UID: \"3abf57a9-2bd6-4f7f-ba75-5fe928201df8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vhvpq" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.791956 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae5ad2a7-30fa-4e6c-9cb7-273f3d260318-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jbrt9\" (UID: \"ae5ad2a7-30fa-4e6c-9cb7-273f3d260318\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbrt9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.792157 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/13cf25e6-413f-472e-8178-439df59c10d7-etcd-serving-ca\") pod \"apiserver-76f77b778f-dp7b9\" (UID: \"13cf25e6-413f-472e-8178-439df59c10d7\") " pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.792194 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad93e803-b1ee-4401-a6c4-0f510b6b8735-serving-cert\") pod \"openshift-config-operator-7777fb866f-f2h2c\" (UID: \"ad93e803-b1ee-4401-a6c4-0f510b6b8735\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2h2c" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.792218 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/13cf25e6-413f-472e-8178-439df59c10d7-encryption-config\") pod \"apiserver-76f77b778f-dp7b9\" (UID: \"13cf25e6-413f-472e-8178-439df59c10d7\") " pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.792244 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2612e265-77ef-4e49-ba43-dddb142b0856-config\") pod \"machine-approver-56656f9798-ffq2r\" (UID: \"2612e265-77ef-4e49-ba43-dddb142b0856\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ffq2r" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.792269 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/387f58c3-60a3-4f10-a29c-3bd2afca3e5d-encryption-config\") pod \"apiserver-7bbb656c7d-w88hs\" (UID: \"387f58c3-60a3-4f10-a29c-3bd2afca3e5d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.792308 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7dac5a2-b81a-483f-9686-a38012959621-service-ca-bundle\") pod \"authentication-operator-69f744f599-xmlr4\" (UID: \"d7dac5a2-b81a-483f-9686-a38012959621\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmlr4" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.792381 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13cf25e6-413f-472e-8178-439df59c10d7-serving-cert\") pod \"apiserver-76f77b778f-dp7b9\" (UID: \"13cf25e6-413f-472e-8178-439df59c10d7\") " pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.792421 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cb58\" (UniqueName: \"kubernetes.io/projected/387f58c3-60a3-4f10-a29c-3bd2afca3e5d-kube-api-access-8cb58\") pod \"apiserver-7bbb656c7d-w88hs\" (UID: \"387f58c3-60a3-4f10-a29c-3bd2afca3e5d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.792444 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2612e265-77ef-4e49-ba43-dddb142b0856-auth-proxy-config\") pod \"machine-approver-56656f9798-ffq2r\" (UID: \"2612e265-77ef-4e49-ba43-dddb142b0856\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ffq2r" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.792470 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3abf57a9-2bd6-4f7f-ba75-5fe928201df8-images\") pod \"machine-api-operator-5694c8668f-vhvpq\" (UID: \"3abf57a9-2bd6-4f7f-ba75-5fe928201df8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vhvpq" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.793156 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b1c48715-1326-41fd-8b2e-007578010c80-audit-dir\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.793468 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3abf57a9-2bd6-4f7f-ba75-5fe928201df8-images\") pod \"machine-api-operator-5694c8668f-vhvpq\" (UID: \"3abf57a9-2bd6-4f7f-ba75-5fe928201df8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vhvpq" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.794402 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-m96mg"] Feb 17 17:46:26 crc 
kubenswrapper[4680]: I0217 17:46:26.796204 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13cf25e6-413f-472e-8178-439df59c10d7-audit-dir\") pod \"apiserver-76f77b778f-dp7b9\" (UID: \"13cf25e6-413f-472e-8178-439df59c10d7\") " pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.796280 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/13cf25e6-413f-472e-8178-439df59c10d7-node-pullsecrets\") pod \"apiserver-76f77b778f-dp7b9\" (UID: \"13cf25e6-413f-472e-8178-439df59c10d7\") " pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.796734 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/387f58c3-60a3-4f10-a29c-3bd2afca3e5d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-w88hs\" (UID: \"387f58c3-60a3-4f10-a29c-3bd2afca3e5d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.796974 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/13cf25e6-413f-472e-8178-439df59c10d7-audit\") pod \"apiserver-76f77b778f-dp7b9\" (UID: \"13cf25e6-413f-472e-8178-439df59c10d7\") " pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.798387 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3abf57a9-2bd6-4f7f-ba75-5fe928201df8-config\") pod \"machine-api-operator-5694c8668f-vhvpq\" (UID: \"3abf57a9-2bd6-4f7f-ba75-5fe928201df8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vhvpq" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.800869 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa-config\") pod \"controller-manager-879f6c89f-5zzk5\" (UID: \"8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zzk5" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.801513 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13cf25e6-413f-472e-8178-439df59c10d7-trusted-ca-bundle\") pod \"apiserver-76f77b778f-dp7b9\" (UID: \"13cf25e6-413f-472e-8178-439df59c10d7\") " pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.796763 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/387f58c3-60a3-4f10-a29c-3bd2afca3e5d-audit-dir\") pod \"apiserver-7bbb656c7d-w88hs\" (UID: \"387f58c3-60a3-4f10-a29c-3bd2afca3e5d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.801973 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa-client-ca\") pod \"controller-manager-879f6c89f-5zzk5\" (UID: \"8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zzk5" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.802450 4680 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ad93e803-b1ee-4401-a6c4-0f510b6b8735-available-featuregates\") pod \"openshift-config-operator-7777fb866f-f2h2c\" (UID: \"ad93e803-b1ee-4401-a6c4-0f510b6b8735\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2h2c" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.805991 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/387f58c3-60a3-4f10-a29c-3bd2afca3e5d-audit-policies\") pod \"apiserver-7bbb656c7d-w88hs\" (UID: \"387f58c3-60a3-4f10-a29c-3bd2afca3e5d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.807010 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.810133 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0da17dc9-f5b1-4877-8242-28d767846f40-metrics-tls\") pod \"dns-operator-744455d44c-m96mg\" (UID: \"0da17dc9-f5b1-4877-8242-28d767846f40\") " pod="openshift-dns-operator/dns-operator-744455d44c-m96mg" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.812848 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7dac5a2-b81a-483f-9686-a38012959621-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xmlr4\" (UID: \"d7dac5a2-b81a-483f-9686-a38012959621\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmlr4" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.810838 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7dac5a2-b81a-483f-9686-a38012959621-serving-cert\") pod \"authentication-operator-69f744f599-xmlr4\" (UID: \"d7dac5a2-b81a-483f-9686-a38012959621\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmlr4" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.811069 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c547720-450f-4714-827e-7694bd2b89f7-config\") pod \"console-operator-58897d9998-2l4fg\" (UID: \"8c547720-450f-4714-827e-7694bd2b89f7\") " pod="openshift-console-operator/console-operator-58897d9998-2l4fg" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.811092 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.811371 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13cf25e6-413f-472e-8178-439df59c10d7-config\") pod \"apiserver-76f77b778f-dp7b9\" (UID: 
\"13cf25e6-413f-472e-8178-439df59c10d7\") " pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.811623 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/387f58c3-60a3-4f10-a29c-3bd2afca3e5d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-w88hs\" (UID: \"387f58c3-60a3-4f10-a29c-3bd2afca3e5d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.811681 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8c547720-450f-4714-827e-7694bd2b89f7-trusted-ca\") pod \"console-operator-58897d9998-2l4fg\" (UID: \"8c547720-450f-4714-827e-7694bd2b89f7\") " pod="openshift-console-operator/console-operator-58897d9998-2l4fg" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.812019 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5zzk5\" (UID: \"8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zzk5" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.813032 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c547720-450f-4714-827e-7694bd2b89f7-serving-cert\") pod \"console-operator-58897d9998-2l4fg\" (UID: \"8c547720-450f-4714-827e-7694bd2b89f7\") " pod="openshift-console-operator/console-operator-58897d9998-2l4fg" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.813402 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2612e265-77ef-4e49-ba43-dddb142b0856-machine-approver-tls\") pod \"machine-approver-56656f9798-ffq2r\" (UID: \"2612e265-77ef-4e49-ba43-dddb142b0856\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ffq2r" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.810634 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.813825 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7dac5a2-b81a-483f-9686-a38012959621-config\") pod \"authentication-operator-69f744f599-xmlr4\" (UID: \"d7dac5a2-b81a-483f-9686-a38012959621\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmlr4" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.814223 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/13cf25e6-413f-472e-8178-439df59c10d7-etcd-serving-ca\") pod \"apiserver-76f77b778f-dp7b9\" (UID: \"13cf25e6-413f-472e-8178-439df59c10d7\") " pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.815496 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d7dac5a2-b81a-483f-9686-a38012959621-service-ca-bundle\") pod \"authentication-operator-69f744f599-xmlr4\" (UID: \"d7dac5a2-b81a-483f-9686-a38012959621\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmlr4" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.806445 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b1c48715-1326-41fd-8b2e-007578010c80-audit-policies\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.816216 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/13cf25e6-413f-472e-8178-439df59c10d7-image-import-ca\") pod \"apiserver-76f77b778f-dp7b9\" (UID: \"13cf25e6-413f-472e-8178-439df59c10d7\") " pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.816273 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.816276 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2612e265-77ef-4e49-ba43-dddb142b0856-config\") pod \"machine-approver-56656f9798-ffq2r\" (UID: \"2612e265-77ef-4e49-ba43-dddb142b0856\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ffq2r" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.816479 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/13cf25e6-413f-472e-8178-439df59c10d7-encryption-config\") pod \"apiserver-76f77b778f-dp7b9\" (UID: \"13cf25e6-413f-472e-8178-439df59c10d7\") " pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.816815 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.816895 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2612e265-77ef-4e49-ba43-dddb142b0856-auth-proxy-config\") pod \"machine-approver-56656f9798-ffq2r\" (UID: \"2612e265-77ef-4e49-ba43-dddb142b0856\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ffq2r" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.817181 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.818470 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k7lq2"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 
17:46:26.818933 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/387f58c3-60a3-4f10-a29c-3bd2afca3e5d-encryption-config\") pod \"apiserver-7bbb656c7d-w88hs\" (UID: \"387f58c3-60a3-4f10-a29c-3bd2afca3e5d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.819001 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13cf25e6-413f-472e-8178-439df59c10d7-serving-cert\") pod \"apiserver-76f77b778f-dp7b9\" (UID: \"13cf25e6-413f-472e-8178-439df59c10d7\") " pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.819453 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad93e803-b1ee-4401-a6c4-0f510b6b8735-serving-cert\") pod \"openshift-config-operator-7777fb866f-f2h2c\" (UID: \"ad93e803-b1ee-4401-a6c4-0f510b6b8735\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2h2c" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.819489 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae5ad2a7-30fa-4e6c-9cb7-273f3d260318-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jbrt9\" (UID: \"ae5ad2a7-30fa-4e6c-9cb7-273f3d260318\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbrt9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.820127 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vhvpq"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.821104 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-vjddm"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.821857 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-vjddm" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.822237 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-pz228"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.822925 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.823756 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-pz228" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.827344 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/13cf25e6-413f-472e-8178-439df59c10d7-etcd-client\") pod \"apiserver-76f77b778f-dp7b9\" (UID: \"13cf25e6-413f-472e-8178-439df59c10d7\") " pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.827652 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.829757 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/387f58c3-60a3-4f10-a29c-3bd2afca3e5d-etcd-client\") pod \"apiserver-7bbb656c7d-w88hs\" (UID: \"387f58c3-60a3-4f10-a29c-3bd2afca3e5d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.830118 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.830290 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.830341 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.830656 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.831039 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.832401 
4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3abf57a9-2bd6-4f7f-ba75-5fe928201df8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vhvpq\" (UID: \"3abf57a9-2bd6-4f7f-ba75-5fe928201df8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vhvpq" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.833851 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa-serving-cert\") pod \"controller-manager-879f6c89f-5zzk5\" (UID: \"8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zzk5" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.843711 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.852897 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-m5zvm"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.856146 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-m5zvm" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.856812 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/387f58c3-60a3-4f10-a29c-3bd2afca3e5d-serving-cert\") pod \"apiserver-7bbb656c7d-w88hs\" (UID: \"387f58c3-60a3-4f10-a29c-3bd2afca3e5d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.857217 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-74j8f"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.858607 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-74j8f" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.859031 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-z8mj5"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.860831 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kpzts"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.862031 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s9v7z"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.863507 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.863680 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rbdrt"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.865141 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-x9rhw"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.866280 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hqvn6"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.867453 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kb4mk"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.868613 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5c84c"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.869919 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bzv2j"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.871349 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522505-8tbt2"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.872630 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4bdxb"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.873989 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-rptfq"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.875156 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-gd8pp"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.876368 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-t9gdn"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.877518 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khf8m"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.878873 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xpf6f"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.880471 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gt6kd"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.881664 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-prsk2"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.883148 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.884749 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rjbs8"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.886635 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fxsf8"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.887524 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fmz28"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.888874 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vdrlm"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.890483 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-vjddm"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.892038 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c462h"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.893496 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lsp6v"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.894958 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-74j8f"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.896004 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-m5zvm"] Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.903461 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.943274 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.963597 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 17 17:46:26 crc kubenswrapper[4680]: I0217 17:46:26.983870 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.003370 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.023808 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.043409 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.064712 
4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.084021 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.120280 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.120296 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.120466 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.122374 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.124292 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.154631 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.163257 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.184137 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.203911 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.223963 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.243637 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.263948 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.283679 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.324035 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.343993 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.364109 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.383793 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.404310 4680 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.423500 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.443140 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.463710 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.483455 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.504106 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.524279 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.544681 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.564615 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.583742 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.603564 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.629340 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.643757 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.663864 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.684393 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.704551 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.723747 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.742396 4680 request.go:700] Waited for 1.002654466s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-operator-config&limit=500&resourceVersion=0 Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.744543 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.764179 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.783387 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.805544 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.824659 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.843595 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.863456 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.883378 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.903644 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.925155 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.943862 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.964266 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 17 17:46:27 crc kubenswrapper[4680]: I0217 17:46:27.984409 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.004778 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.024980 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.044726 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.064522 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.084311 4680 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.104012 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.104250 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.124778 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.144343 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.164020 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.184135 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.204605 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.223928 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.244564 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.264492 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.284287 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.304278 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.323428 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.357189 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hmtx\" (UniqueName: \"kubernetes.io/projected/b1c48715-1326-41fd-8b2e-007578010c80-kube-api-access-8hmtx\") pod \"oauth-openshift-558db77b4-qxkds\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.377078 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h27fz\" (UniqueName: \"kubernetes.io/projected/8c547720-450f-4714-827e-7694bd2b89f7-kube-api-access-h27fz\") pod \"console-operator-58897d9998-2l4fg\" (UID: \"8c547720-450f-4714-827e-7694bd2b89f7\") 
" pod="openshift-console-operator/console-operator-58897d9998-2l4fg" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.397916 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjchx\" (UniqueName: \"kubernetes.io/projected/0da17dc9-f5b1-4877-8242-28d767846f40-kube-api-access-bjchx\") pod \"dns-operator-744455d44c-m96mg\" (UID: \"0da17dc9-f5b1-4877-8242-28d767846f40\") " pod="openshift-dns-operator/dns-operator-744455d44c-m96mg" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.418170 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-565qs\" (UniqueName: \"kubernetes.io/projected/ae5ad2a7-30fa-4e6c-9cb7-273f3d260318-kube-api-access-565qs\") pod \"openshift-apiserver-operator-796bbdcf4f-jbrt9\" (UID: \"ae5ad2a7-30fa-4e6c-9cb7-273f3d260318\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbrt9" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.435841 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fq5k\" (UniqueName: \"kubernetes.io/projected/d7dac5a2-b81a-483f-9686-a38012959621-kube-api-access-4fq5k\") pod \"authentication-operator-69f744f599-xmlr4\" (UID: \"d7dac5a2-b81a-483f-9686-a38012959621\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmlr4" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.457463 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4ns2\" (UniqueName: \"kubernetes.io/projected/8e87e04c-2d08-4f13-8b5a-db27f849132e-kube-api-access-b4ns2\") pod \"downloads-7954f5f757-z9fcm\" (UID: \"8e87e04c-2d08-4f13-8b5a-db27f849132e\") " pod="openshift-console/downloads-7954f5f757-z9fcm" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.472581 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbrt9" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.477228 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szhmv\" (UniqueName: \"kubernetes.io/projected/8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa-kube-api-access-szhmv\") pod \"controller-manager-879f6c89f-5zzk5\" (UID: \"8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zzk5" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.496959 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cswfz\" (UniqueName: \"kubernetes.io/projected/ad93e803-b1ee-4401-a6c4-0f510b6b8735-kube-api-access-cswfz\") pod \"openshift-config-operator-7777fb866f-f2h2c\" (UID: \"ad93e803-b1ee-4401-a6c4-0f510b6b8735\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2h2c" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.497113 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.518336 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-xmlr4" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.522008 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dk2b\" (UniqueName: \"kubernetes.io/projected/13cf25e6-413f-472e-8178-439df59c10d7-kube-api-access-5dk2b\") pod \"apiserver-76f77b778f-dp7b9\" (UID: \"13cf25e6-413f-472e-8178-439df59c10d7\") " pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.550900 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwsmj\" (UniqueName: \"kubernetes.io/projected/2612e265-77ef-4e49-ba43-dddb142b0856-kube-api-access-dwsmj\") pod \"machine-approver-56656f9798-ffq2r\" (UID: \"2612e265-77ef-4e49-ba43-dddb142b0856\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ffq2r" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.552638 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2h2c" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.564837 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sccxt\" (UniqueName: \"kubernetes.io/projected/3abf57a9-2bd6-4f7f-ba75-5fe928201df8-kube-api-access-sccxt\") pod \"machine-api-operator-5694c8668f-vhvpq\" (UID: \"3abf57a9-2bd6-4f7f-ba75-5fe928201df8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vhvpq" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.568272 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-2l4fg" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.579988 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cb58\" (UniqueName: \"kubernetes.io/projected/387f58c3-60a3-4f10-a29c-3bd2afca3e5d-kube-api-access-8cb58\") pod \"apiserver-7bbb656c7d-w88hs\" (UID: \"387f58c3-60a3-4f10-a29c-3bd2afca3e5d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.581804 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-z9fcm" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.583800 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.594344 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-m96mg" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.604880 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.630523 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.648498 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.659818 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.663834 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.682994 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-vhvpq" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.684106 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.696723 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ffq2r" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.714594 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5zzk5" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.716298 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.727149 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.744412 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbrt9"] Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.745598 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.756221 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.762439 4680 request.go:700] Waited for 1.905798571s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.764048 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.787058 4680 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.792130 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qxkds"] Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.827085 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.831886 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.834863 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xmlr4"] Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.866200 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.895572 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.904622 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.925703 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.966796 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 17 17:46:28 crc kubenswrapper[4680]: I0217 17:46:28.984500 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.039973 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1c640310-31cf-4feb-a173-5688042c6b6d-registry-certificates\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.040000 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgczf\" (UniqueName: \"kubernetes.io/projected/1c640310-31cf-4feb-a173-5688042c6b6d-kube-api-access-kgczf\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.040019 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed1c57d6-3991-4ebb-9149-63e38e19dc40-console-serving-cert\") pod \"console-f9d7485db-t9gdn\" (UID: \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\") " pod="openshift-console/console-f9d7485db-t9gdn" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.040035 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ed1c57d6-3991-4ebb-9149-63e38e19dc40-console-config\") pod \"console-f9d7485db-t9gdn\" (UID: \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\") " pod="openshift-console/console-f9d7485db-t9gdn" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.040055 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ed1c57d6-3991-4ebb-9149-63e38e19dc40-oauth-serving-cert\") pod \"console-f9d7485db-t9gdn\" (UID: \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\") " pod="openshift-console/console-f9d7485db-t9gdn" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.040112 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e-etcd-service-ca\") pod \"etcd-operator-b45778765-xpf6f\" (UID: \"9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpf6f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.040128 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4a9f3446-4189-40c0-a189-1ad04182560f-metrics-tls\") pod \"ingress-operator-5b745b69d9-vdrlm\" (UID: \"4a9f3446-4189-40c0-a189-1ad04182560f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdrlm" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.040143 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e-etcd-ca\") pod \"etcd-operator-b45778765-xpf6f\" (UID: \"9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpf6f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.040183 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1c640310-31cf-4feb-a173-5688042c6b6d-bound-sa-token\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.040224 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0b37c5d2-3899-45db-b4a3-976af83ba107-apiservice-cert\") pod \"packageserver-d55dfcdfc-4bdxb\" (UID: \"0b37c5d2-3899-45db-b4a3-976af83ba107\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4bdxb" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.040249 4680 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0b37c5d2-3899-45db-b4a3-976af83ba107-tmpfs\") pod \"packageserver-d55dfcdfc-4bdxb\" (UID: \"0b37c5d2-3899-45db-b4a3-976af83ba107\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4bdxb" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.040284 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1c640310-31cf-4feb-a173-5688042c6b6d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.040453 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab-stats-auth\") pod \"router-default-5444994796-nkxs4\" (UID: \"aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab\") " pod="openshift-ingress/router-default-5444994796-nkxs4" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.040494 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f04bff26-dca4-4b98-82e8-582cf4305ee3-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-kpzts\" (UID: \"f04bff26-dca4-4b98-82e8-582cf4305ee3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kpzts" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.040520 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06368790-6302-406a-9f22-a1c07b6f91a2-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-k7lq2\" (UID: \"06368790-6302-406a-9f22-a1c07b6f91a2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k7lq2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.040567 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c640310-31cf-4feb-a173-5688042c6b6d-trusted-ca\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.040593 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1c640310-31cf-4feb-a173-5688042c6b6d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.040608 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ed1c57d6-3991-4ebb-9149-63e38e19dc40-console-oauth-config\") pod \"console-f9d7485db-t9gdn\" (UID: \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\") " pod="openshift-console/console-f9d7485db-t9gdn" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.040624 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/da6d5deb-7032-441b-b78a-efbecc6c25fd-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5c84c\" (UID: \"da6d5deb-7032-441b-b78a-efbecc6c25fd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5c84c" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.040641 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbwgj\" (UniqueName: \"kubernetes.io/projected/14cc3153-fc79-4a60-b371-eae93f02ba78-kube-api-access-jbwgj\") pod \"route-controller-manager-6576b87f9c-kb4mk\" (UID: \"14cc3153-fc79-4a60-b371-eae93f02ba78\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kb4mk" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.040655 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f04bff26-dca4-4b98-82e8-582cf4305ee3-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-kpzts\" (UID: \"f04bff26-dca4-4b98-82e8-582cf4305ee3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kpzts" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.040679 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4a9f3446-4189-40c0-a189-1ad04182560f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vdrlm\" (UID: \"4a9f3446-4189-40c0-a189-1ad04182560f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdrlm" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.040734 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0b37c5d2-3899-45db-b4a3-976af83ba107-webhook-cert\") pod \"packageserver-d55dfcdfc-4bdxb\" (UID: \"0b37c5d2-3899-45db-b4a3-976af83ba107\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4bdxb" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.040748 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4a9f3446-4189-40c0-a189-1ad04182560f-trusted-ca\") pod \"ingress-operator-5b745b69d9-vdrlm\" (UID: \"4a9f3446-4189-40c0-a189-1ad04182560f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdrlm" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.040762 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14cc3153-fc79-4a60-b371-eae93f02ba78-serving-cert\") pod \"route-controller-manager-6576b87f9c-kb4mk\" (UID: \"14cc3153-fc79-4a60-b371-eae93f02ba78\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kb4mk" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.040797 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab-service-ca-bundle\") pod \"router-default-5444994796-nkxs4\" (UID: \"aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab\") " pod="openshift-ingress/router-default-5444994796-nkxs4" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.040821 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-lj6gr\" (UniqueName: \"kubernetes.io/projected/4a9f3446-4189-40c0-a189-1ad04182560f-kube-api-access-lj6gr\") pod \"ingress-operator-5b745b69d9-vdrlm\" (UID: \"4a9f3446-4189-40c0-a189-1ad04182560f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdrlm" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.040837 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e-serving-cert\") pod \"etcd-operator-b45778765-xpf6f\" (UID: \"9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpf6f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.040851 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/06368790-6302-406a-9f22-a1c07b6f91a2-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-k7lq2\" (UID: \"06368790-6302-406a-9f22-a1c07b6f91a2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k7lq2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.040905 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwls7\" (UniqueName: \"kubernetes.io/projected/0b37c5d2-3899-45db-b4a3-976af83ba107-kube-api-access-cwls7\") pod \"packageserver-d55dfcdfc-4bdxb\" (UID: \"0b37c5d2-3899-45db-b4a3-976af83ba107\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4bdxb" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.040920 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgrtz\" (UniqueName: \"kubernetes.io/projected/aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab-kube-api-access-bgrtz\") pod \"router-default-5444994796-nkxs4\" (UID: \"aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab\") " pod="openshift-ingress/router-default-5444994796-nkxs4" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.040938 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2pm9\" (UniqueName: \"kubernetes.io/projected/06368790-6302-406a-9f22-a1c07b6f91a2-kube-api-access-z2pm9\") pod \"cluster-image-registry-operator-dc59b4c8b-k7lq2\" (UID: \"06368790-6302-406a-9f22-a1c07b6f91a2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k7lq2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.040979 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed1c57d6-3991-4ebb-9149-63e38e19dc40-trusted-ca-bundle\") pod \"console-f9d7485db-t9gdn\" (UID: \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\") " pod="openshift-console/console-f9d7485db-t9gdn" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.041022 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.041044 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab-metrics-certs\") pod \"router-default-5444994796-nkxs4\" (UID: \"aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab\") " pod="openshift-ingress/router-default-5444994796-nkxs4" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.041073 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-488nm\" (UniqueName: \"kubernetes.io/projected/da6d5deb-7032-441b-b78a-efbecc6c25fd-kube-api-access-488nm\") pod \"cluster-samples-operator-665b6dd947-5c84c\" (UID: \"da6d5deb-7032-441b-b78a-efbecc6c25fd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5c84c" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.041090 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8rds\" (UniqueName: \"kubernetes.io/projected/ed1c57d6-3991-4ebb-9149-63e38e19dc40-kube-api-access-q8rds\") pod \"console-f9d7485db-t9gdn\" (UID: \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\") " pod="openshift-console/console-f9d7485db-t9gdn" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.041106 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/06368790-6302-406a-9f22-a1c07b6f91a2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-k7lq2\" (UID: \"06368790-6302-406a-9f22-a1c07b6f91a2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k7lq2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.041162 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e-config\") pod \"etcd-operator-b45778765-xpf6f\" (UID: \"9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpf6f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.041180 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e-etcd-client\") pod \"etcd-operator-b45778765-xpf6f\" (UID: \"9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpf6f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.041195 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg4vt\" (UniqueName: \"kubernetes.io/projected/9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e-kube-api-access-kg4vt\") pod \"etcd-operator-b45778765-xpf6f\" (UID: \"9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpf6f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.041211 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab-default-certificate\") pod \"router-default-5444994796-nkxs4\" (UID: \"aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab\") " pod="openshift-ingress/router-default-5444994796-nkxs4" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.041280 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/1c640310-31cf-4feb-a173-5688042c6b6d-registry-tls\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.041296 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14cc3153-fc79-4a60-b371-eae93f02ba78-config\") pod \"route-controller-manager-6576b87f9c-kb4mk\" (UID: \"14cc3153-fc79-4a60-b371-eae93f02ba78\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kb4mk" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.041347 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14cc3153-fc79-4a60-b371-eae93f02ba78-client-ca\") pod \"route-controller-manager-6576b87f9c-kb4mk\" (UID: \"14cc3153-fc79-4a60-b371-eae93f02ba78\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kb4mk" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.041365 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f04bff26-dca4-4b98-82e8-582cf4305ee3-config\") pod \"kube-apiserver-operator-766d6c64bb-kpzts\" (UID: \"f04bff26-dca4-4b98-82e8-582cf4305ee3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kpzts" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.041382 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ed1c57d6-3991-4ebb-9149-63e38e19dc40-service-ca\") pod \"console-f9d7485db-t9gdn\" (UID: \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\") " pod="openshift-console/console-f9d7485db-t9gdn" Feb 17 17:46:29 crc kubenswrapper[4680]: E0217 17:46:29.044684 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:29.544670443 +0000 UTC m=+119.095569122 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.051421 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs"] Feb 17 17:46:29 crc kubenswrapper[4680]: W0217 17:46:29.074691 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod387f58c3_60a3_4f10_a29c_3bd2afca3e5d.slice/crio-6286c9d97b5405053eebd2539e39a4914e575bd009ccc1501d9a47eb55898c4a WatchSource:0}: Error finding container 6286c9d97b5405053eebd2539e39a4914e575bd009ccc1501d9a47eb55898c4a: Status 404 returned error can't find the container with id 6286c9d97b5405053eebd2539e39a4914e575bd009ccc1501d9a47eb55898c4a Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.094397 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-m96mg"] Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.104903 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-f2h2c"] Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.126241 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2l4fg"] Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.132114 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5zzk5"] Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.141861 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142086 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14cc3153-fc79-4a60-b371-eae93f02ba78-client-ca\") pod \"route-controller-manager-6576b87f9c-kb4mk\" (UID: \"14cc3153-fc79-4a60-b371-eae93f02ba78\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kb4mk" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142127 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b6a027a-aad9-445b-8bab-ac0004a7c565-secret-volume\") pod \"collect-profiles-29522505-8tbt2\" (UID: \"4b6a027a-aad9-445b-8bab-ac0004a7c565\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522505-8tbt2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142152 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2953a7b7-7e10-4037-94d5-08322d5b7521-registration-dir\") pod \"csi-hostpathplugin-74j8f\" (UID: \"2953a7b7-7e10-4037-94d5-08322d5b7521\") " 
pod="hostpath-provisioner/csi-hostpathplugin-74j8f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142171 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbfccbea-2fcb-4c86-b24e-963961a2cd3a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rbdrt\" (UID: \"cbfccbea-2fcb-4c86-b24e-963961a2cd3a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rbdrt" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142199 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ed1c57d6-3991-4ebb-9149-63e38e19dc40-console-config\") pod \"console-f9d7485db-t9gdn\" (UID: \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\") " pod="openshift-console/console-f9d7485db-t9gdn" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142219 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2953a7b7-7e10-4037-94d5-08322d5b7521-mountpoint-dir\") pod \"csi-hostpathplugin-74j8f\" (UID: \"2953a7b7-7e10-4037-94d5-08322d5b7521\") " pod="hostpath-provisioner/csi-hostpathplugin-74j8f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142241 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bd944e7-4291-4cac-925b-2115e5bdb0ef-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fmz28\" (UID: \"4bd944e7-4291-4cac-925b-2115e5bdb0ef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fmz28" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142259 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9dad1493-ca2b-4e2d-8c5f-8374b30deefa-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-x9rhw\" (UID: \"9dad1493-ca2b-4e2d-8c5f-8374b30deefa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x9rhw" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142285 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e-etcd-service-ca\") pod \"etcd-operator-b45778765-xpf6f\" (UID: \"9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpf6f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142307 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj77b\" (UniqueName: \"kubernetes.io/projected/2953a7b7-7e10-4037-94d5-08322d5b7521-kube-api-access-rj77b\") pod \"csi-hostpathplugin-74j8f\" (UID: \"2953a7b7-7e10-4037-94d5-08322d5b7521\") " pod="hostpath-provisioner/csi-hostpathplugin-74j8f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142371 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa215132-e3f4-460b-85d2-19d7dff33c1a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-khf8m\" (UID: \"fa215132-e3f4-460b-85d2-19d7dff33c1a\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khf8m" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142391 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/789c22f5-7717-4e0f-9df4-6bc17ed6a932-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-gd8pp\" (UID: \"789c22f5-7717-4e0f-9df4-6bc17ed6a932\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gd8pp" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142414 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg2wr\" (UniqueName: \"kubernetes.io/projected/3beac5a6-c506-4360-bab3-277801f0fb05-kube-api-access-pg2wr\") pod \"machine-config-server-pz228\" (UID: \"3beac5a6-c506-4360-bab3-277801f0fb05\") " pod="openshift-machine-config-operator/machine-config-server-pz228" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142452 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1c640310-31cf-4feb-a173-5688042c6b6d-bound-sa-token\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142476 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3beac5a6-c506-4360-bab3-277801f0fb05-node-bootstrap-token\") pod \"machine-config-server-pz228\" (UID: \"3beac5a6-c506-4360-bab3-277801f0fb05\") " pod="openshift-machine-config-operator/machine-config-server-pz228" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142500 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0b37c5d2-3899-45db-b4a3-976af83ba107-apiservice-cert\") pod \"packageserver-d55dfcdfc-4bdxb\" (UID: \"0b37c5d2-3899-45db-b4a3-976af83ba107\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4bdxb" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142519 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwwlj\" (UniqueName: \"kubernetes.io/projected/feae09d1-ce01-4b1a-a198-f267e94d9190-kube-api-access-xwwlj\") pod \"ingress-canary-m5zvm\" (UID: \"feae09d1-ce01-4b1a-a198-f267e94d9190\") " pod="openshift-ingress-canary/ingress-canary-m5zvm" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142545 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1c640310-31cf-4feb-a173-5688042c6b6d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142569 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af8115d2-8e03-47f7-852c-4c7f08e1d089-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-s9v7z\" (UID: \"af8115d2-8e03-47f7-852c-4c7f08e1d089\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s9v7z" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142594 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f04bff26-dca4-4b98-82e8-582cf4305ee3-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-kpzts\" (UID: \"f04bff26-dca4-4b98-82e8-582cf4305ee3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kpzts" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142615 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa215132-e3f4-460b-85d2-19d7dff33c1a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-khf8m\" (UID: \"fa215132-e3f4-460b-85d2-19d7dff33c1a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khf8m" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142642 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06368790-6302-406a-9f22-a1c07b6f91a2-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-k7lq2\" (UID: \"06368790-6302-406a-9f22-a1c07b6f91a2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k7lq2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142666 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jc7m\" (UniqueName: \"kubernetes.io/projected/210b630c-d271-49ec-9381-795e5e92cf4e-kube-api-access-5jc7m\") pod \"package-server-manager-789f6589d5-lsp6v\" (UID: \"210b630c-d271-49ec-9381-795e5e92cf4e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lsp6v" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142688 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c640310-31cf-4feb-a173-5688042c6b6d-trusted-ca\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142710 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbwgj\" (UniqueName: \"kubernetes.io/projected/14cc3153-fc79-4a60-b371-eae93f02ba78-kube-api-access-jbwgj\") pod \"route-controller-manager-6576b87f9c-kb4mk\" (UID: \"14cc3153-fc79-4a60-b371-eae93f02ba78\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kb4mk" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142729 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f04bff26-dca4-4b98-82e8-582cf4305ee3-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-kpzts\" (UID: \"f04bff26-dca4-4b98-82e8-582cf4305ee3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kpzts" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142748 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4a9f3446-4189-40c0-a189-1ad04182560f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vdrlm\" (UID: \"4a9f3446-4189-40c0-a189-1ad04182560f\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdrlm" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142771 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4a9f3446-4189-40c0-a189-1ad04182560f-trusted-ca\") pod \"ingress-operator-5b745b69d9-vdrlm\" (UID: \"4a9f3446-4189-40c0-a189-1ad04182560f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdrlm" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142796 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx7tb\" (UniqueName: \"kubernetes.io/projected/67e5bb04-4920-4f51-a729-0af77b16490e-kube-api-access-gx7tb\") pod \"control-plane-machine-set-operator-78cbb6b69f-c462h\" (UID: \"67e5bb04-4920-4f51-a729-0af77b16490e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c462h" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142819 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brwmd\" (UniqueName: \"kubernetes.io/projected/a1816a7c-9c0b-49db-b812-dd9b2c605760-kube-api-access-brwmd\") pod \"migrator-59844c95c7-bzv2j\" (UID: \"a1816a7c-9c0b-49db-b812-dd9b2c605760\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bzv2j" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142844 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab-service-ca-bundle\") pod \"router-default-5444994796-nkxs4\" (UID: \"aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab\") " pod="openshift-ingress/router-default-5444994796-nkxs4" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142877 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e-serving-cert\") pod \"etcd-operator-b45778765-xpf6f\" (UID: \"9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpf6f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142899 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgrtz\" (UniqueName: \"kubernetes.io/projected/aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab-kube-api-access-bgrtz\") pod \"router-default-5444994796-nkxs4\" (UID: \"aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab\") " pod="openshift-ingress/router-default-5444994796-nkxs4" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142929 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6cgl\" (UniqueName: \"kubernetes.io/projected/fb8e2703-d2bb-403c-9912-e7d9a1e2475d-kube-api-access-m6cgl\") pod \"catalog-operator-68c6474976-prsk2\" (UID: \"fb8e2703-d2bb-403c-9912-e7d9a1e2475d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-prsk2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142949 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt6tb\" (UniqueName: \"kubernetes.io/projected/cbfccbea-2fcb-4c86-b24e-963961a2cd3a-kube-api-access-vt6tb\") pod \"kube-storage-version-migrator-operator-b67b599dd-rbdrt\" (UID: \"cbfccbea-2fcb-4c86-b24e-963961a2cd3a\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rbdrt" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.142971 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf8d275e-2c24-433b-84a2-0e415ab00c62-config\") pod \"service-ca-operator-777779d784-hqvn6\" (UID: \"bf8d275e-2c24-433b-84a2-0e415ab00c62\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hqvn6" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143005 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab-metrics-certs\") pod \"router-default-5444994796-nkxs4\" (UID: \"aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab\") " pod="openshift-ingress/router-default-5444994796-nkxs4" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143027 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-488nm\" (UniqueName: \"kubernetes.io/projected/da6d5deb-7032-441b-b78a-efbecc6c25fd-kube-api-access-488nm\") pod \"cluster-samples-operator-665b6dd947-5c84c\" (UID: \"da6d5deb-7032-441b-b78a-efbecc6c25fd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5c84c" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143047 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fb8e2703-d2bb-403c-9912-e7d9a1e2475d-profile-collector-cert\") pod \"catalog-operator-68c6474976-prsk2\" (UID: \"fb8e2703-d2bb-403c-9912-e7d9a1e2475d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-prsk2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143068 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9fns\" (UniqueName: \"kubernetes.io/projected/69d9c3bb-1b14-45a5-8aa1-29c8b79fd809-kube-api-access-h9fns\") pod \"service-ca-9c57cc56f-rptfq\" (UID: \"69d9c3bb-1b14-45a5-8aa1-29c8b79fd809\") " pod="openshift-service-ca/service-ca-9c57cc56f-rptfq" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143090 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/06368790-6302-406a-9f22-a1c07b6f91a2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-k7lq2\" (UID: \"06368790-6302-406a-9f22-a1c07b6f91a2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k7lq2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143113 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e-etcd-client\") pod \"etcd-operator-b45778765-xpf6f\" (UID: \"9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpf6f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143132 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab-default-certificate\") pod \"router-default-5444994796-nkxs4\" (UID: \"aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab\") " pod="openshift-ingress/router-default-5444994796-nkxs4" Feb 17 
17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143172 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17edf562-a2eb-4263-bbce-c65013fd645f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rjbs8\" (UID: \"17edf562-a2eb-4263-bbce-c65013fd645f\") " pod="openshift-marketplace/marketplace-operator-79b997595-rjbs8" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143199 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1c640310-31cf-4feb-a173-5688042c6b6d-registry-tls\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143221 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/05be763f-68c8-46fe-a8c5-f30ee5afc5db-proxy-tls\") pod \"machine-config-operator-74547568cd-gt6kd\" (UID: \"05be763f-68c8-46fe-a8c5-f30ee5afc5db\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gt6kd" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143245 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f04bff26-dca4-4b98-82e8-582cf4305ee3-config\") pod \"kube-apiserver-operator-766d6c64bb-kpzts\" (UID: \"f04bff26-dca4-4b98-82e8-582cf4305ee3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kpzts" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143266 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ed1c57d6-3991-4ebb-9149-63e38e19dc40-service-ca\") pod \"console-f9d7485db-t9gdn\" (UID: \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\") " pod="openshift-console/console-f9d7485db-t9gdn" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143288 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/69d9c3bb-1b14-45a5-8aa1-29c8b79fd809-signing-cabundle\") pod \"service-ca-9c57cc56f-rptfq\" (UID: \"69d9c3bb-1b14-45a5-8aa1-29c8b79fd809\") " pod="openshift-service-ca/service-ca-9c57cc56f-rptfq" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143345 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1c640310-31cf-4feb-a173-5688042c6b6d-registry-certificates\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143368 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgczf\" (UniqueName: \"kubernetes.io/projected/1c640310-31cf-4feb-a173-5688042c6b6d-kube-api-access-kgczf\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143391 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ed1c57d6-3991-4ebb-9149-63e38e19dc40-console-serving-cert\") pod \"console-f9d7485db-t9gdn\" (UID: \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\") " pod="openshift-console/console-f9d7485db-t9gdn" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143414 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ed1c57d6-3991-4ebb-9149-63e38e19dc40-oauth-serving-cert\") pod \"console-f9d7485db-t9gdn\" (UID: \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\") " pod="openshift-console/console-f9d7485db-t9gdn" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143438 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4a9f3446-4189-40c0-a189-1ad04182560f-metrics-tls\") pod \"ingress-operator-5b745b69d9-vdrlm\" (UID: \"4a9f3446-4189-40c0-a189-1ad04182560f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdrlm" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143458 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e-etcd-ca\") pod \"etcd-operator-b45778765-xpf6f\" (UID: \"9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpf6f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143478 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/05be763f-68c8-46fe-a8c5-f30ee5afc5db-images\") pod \"machine-config-operator-74547568cd-gt6kd\" (UID: \"05be763f-68c8-46fe-a8c5-f30ee5afc5db\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gt6kd" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143499 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/05be763f-68c8-46fe-a8c5-f30ee5afc5db-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gt6kd\" (UID: \"05be763f-68c8-46fe-a8c5-f30ee5afc5db\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gt6kd" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143525 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/17edf562-a2eb-4263-bbce-c65013fd645f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rjbs8\" (UID: \"17edf562-a2eb-4263-bbce-c65013fd645f\") " pod="openshift-marketplace/marketplace-operator-79b997595-rjbs8" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143544 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3beac5a6-c506-4360-bab3-277801f0fb05-certs\") pod \"machine-config-server-pz228\" (UID: \"3beac5a6-c506-4360-bab3-277801f0fb05\") " pod="openshift-machine-config-operator/machine-config-server-pz228" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143564 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b6a027a-aad9-445b-8bab-ac0004a7c565-config-volume\") pod \"collect-profiles-29522505-8tbt2\" (UID: \"4b6a027a-aad9-445b-8bab-ac0004a7c565\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522505-8tbt2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143581 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa215132-e3f4-460b-85d2-19d7dff33c1a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-khf8m\" (UID: \"fa215132-e3f4-460b-85d2-19d7dff33c1a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khf8m" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143600 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0b37c5d2-3899-45db-b4a3-976af83ba107-tmpfs\") pod \"packageserver-d55dfcdfc-4bdxb\" (UID: \"0b37c5d2-3899-45db-b4a3-976af83ba107\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4bdxb" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143620 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/210b630c-d271-49ec-9381-795e5e92cf4e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-lsp6v\" (UID: \"210b630c-d271-49ec-9381-795e5e92cf4e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lsp6v" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143645 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmpm4\" (UniqueName: \"kubernetes.io/projected/17edf562-a2eb-4263-bbce-c65013fd645f-kube-api-access-wmpm4\") pod \"marketplace-operator-79b997595-rjbs8\" (UID: \"17edf562-a2eb-4263-bbce-c65013fd645f\") " pod="openshift-marketplace/marketplace-operator-79b997595-rjbs8" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143666 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnvdh\" (UniqueName: \"kubernetes.io/projected/5c3da943-badb-4d63-a3e2-541a9ecf9347-kube-api-access-hnvdh\") pod \"olm-operator-6b444d44fb-fxsf8\" (UID: \"5c3da943-badb-4d63-a3e2-541a9ecf9347\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fxsf8" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143687 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af8115d2-8e03-47f7-852c-4c7f08e1d089-config\") pod \"kube-controller-manager-operator-78b949d7b-s9v7z\" (UID: \"af8115d2-8e03-47f7-852c-4c7f08e1d089\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s9v7z" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143706 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fb8e2703-d2bb-403c-9912-e7d9a1e2475d-srv-cert\") pod \"catalog-operator-68c6474976-prsk2\" (UID: \"fb8e2703-d2bb-403c-9912-e7d9a1e2475d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-prsk2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143727 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2953a7b7-7e10-4037-94d5-08322d5b7521-plugins-dir\") pod \"csi-hostpathplugin-74j8f\" (UID: 
\"2953a7b7-7e10-4037-94d5-08322d5b7521\") " pod="hostpath-provisioner/csi-hostpathplugin-74j8f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143744 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5c3da943-badb-4d63-a3e2-541a9ecf9347-srv-cert\") pod \"olm-operator-6b444d44fb-fxsf8\" (UID: \"5c3da943-badb-4d63-a3e2-541a9ecf9347\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fxsf8" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143763 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/80bd8e1f-fde7-45e5-a482-8dd2d336bb3e-metrics-tls\") pod \"dns-default-vjddm\" (UID: \"80bd8e1f-fde7-45e5-a482-8dd2d336bb3e\") " pod="openshift-dns/dns-default-vjddm" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143794 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab-stats-auth\") pod \"router-default-5444994796-nkxs4\" (UID: \"aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab\") " pod="openshift-ingress/router-default-5444994796-nkxs4" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143817 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2953a7b7-7e10-4037-94d5-08322d5b7521-csi-data-dir\") pod \"csi-hostpathplugin-74j8f\" (UID: \"2953a7b7-7e10-4037-94d5-08322d5b7521\") " pod="hostpath-provisioner/csi-hostpathplugin-74j8f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143838 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/67e5bb04-4920-4f51-a729-0af77b16490e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-c462h\" (UID: \"67e5bb04-4920-4f51-a729-0af77b16490e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c462h" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143858 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n68nh\" (UniqueName: \"kubernetes.io/projected/80bd8e1f-fde7-45e5-a482-8dd2d336bb3e-kube-api-access-n68nh\") pod \"dns-default-vjddm\" (UID: \"80bd8e1f-fde7-45e5-a482-8dd2d336bb3e\") " pod="openshift-dns/dns-default-vjddm" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143880 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnpfh\" (UniqueName: \"kubernetes.io/projected/9dad1493-ca2b-4e2d-8c5f-8374b30deefa-kube-api-access-hnpfh\") pod \"machine-config-controller-84d6567774-x9rhw\" (UID: \"9dad1493-ca2b-4e2d-8c5f-8374b30deefa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x9rhw" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143900 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1c640310-31cf-4feb-a173-5688042c6b6d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:29 crc 
kubenswrapper[4680]: I0217 17:46:29.143919 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ed1c57d6-3991-4ebb-9149-63e38e19dc40-console-oauth-config\") pod \"console-f9d7485db-t9gdn\" (UID: \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\") " pod="openshift-console/console-f9d7485db-t9gdn" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143938 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/da6d5deb-7032-441b-b78a-efbecc6c25fd-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5c84c\" (UID: \"da6d5deb-7032-441b-b78a-efbecc6c25fd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5c84c" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143961 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9dad1493-ca2b-4e2d-8c5f-8374b30deefa-proxy-tls\") pod \"machine-config-controller-84d6567774-x9rhw\" (UID: \"9dad1493-ca2b-4e2d-8c5f-8374b30deefa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x9rhw" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.143983 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0b37c5d2-3899-45db-b4a3-976af83ba107-webhook-cert\") pod \"packageserver-d55dfcdfc-4bdxb\" (UID: \"0b37c5d2-3899-45db-b4a3-976af83ba107\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4bdxb" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.144001 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14cc3153-fc79-4a60-b371-eae93f02ba78-serving-cert\") pod \"route-controller-manager-6576b87f9c-kb4mk\" (UID: \"14cc3153-fc79-4a60-b371-eae93f02ba78\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kb4mk" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.144022 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsb82\" (UniqueName: \"kubernetes.io/projected/bf8d275e-2c24-433b-84a2-0e415ab00c62-kube-api-access-jsb82\") pod \"service-ca-operator-777779d784-hqvn6\" (UID: \"bf8d275e-2c24-433b-84a2-0e415ab00c62\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hqvn6" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.144044 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj6gr\" (UniqueName: \"kubernetes.io/projected/4a9f3446-4189-40c0-a189-1ad04182560f-kube-api-access-lj6gr\") pod \"ingress-operator-5b745b69d9-vdrlm\" (UID: \"4a9f3446-4189-40c0-a189-1ad04182560f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdrlm" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.144066 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/06368790-6302-406a-9f22-a1c07b6f91a2-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-k7lq2\" (UID: \"06368790-6302-406a-9f22-a1c07b6f91a2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k7lq2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.144087 4680 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k69bn\" (UniqueName: \"kubernetes.io/projected/05be763f-68c8-46fe-a8c5-f30ee5afc5db-kube-api-access-k69bn\") pod \"machine-config-operator-74547568cd-gt6kd\" (UID: \"05be763f-68c8-46fe-a8c5-f30ee5afc5db\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gt6kd" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.144113 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwls7\" (UniqueName: \"kubernetes.io/projected/0b37c5d2-3899-45db-b4a3-976af83ba107-kube-api-access-cwls7\") pod \"packageserver-d55dfcdfc-4bdxb\" (UID: \"0b37c5d2-3899-45db-b4a3-976af83ba107\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4bdxb" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.144133 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bd944e7-4291-4cac-925b-2115e5bdb0ef-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fmz28\" (UID: \"4bd944e7-4291-4cac-925b-2115e5bdb0ef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fmz28" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.144169 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2pm9\" (UniqueName: \"kubernetes.io/projected/06368790-6302-406a-9f22-a1c07b6f91a2-kube-api-access-z2pm9\") pod \"cluster-image-registry-operator-dc59b4c8b-k7lq2\" (UID: \"06368790-6302-406a-9f22-a1c07b6f91a2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k7lq2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.144187 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2953a7b7-7e10-4037-94d5-08322d5b7521-socket-dir\") pod \"csi-hostpathplugin-74j8f\" (UID: \"2953a7b7-7e10-4037-94d5-08322d5b7521\") " pod="hostpath-provisioner/csi-hostpathplugin-74j8f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.144213 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed1c57d6-3991-4ebb-9149-63e38e19dc40-trusted-ca-bundle\") pod \"console-f9d7485db-t9gdn\" (UID: \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\") " pod="openshift-console/console-f9d7485db-t9gdn" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.144233 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh4rz\" (UniqueName: \"kubernetes.io/projected/4b6a027a-aad9-445b-8bab-ac0004a7c565-kube-api-access-hh4rz\") pod \"collect-profiles-29522505-8tbt2\" (UID: \"4b6a027a-aad9-445b-8bab-ac0004a7c565\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522505-8tbt2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.144254 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fjjm\" (UniqueName: \"kubernetes.io/projected/4bd944e7-4291-4cac-925b-2115e5bdb0ef-kube-api-access-8fjjm\") pod \"openshift-controller-manager-operator-756b6f6bc6-fmz28\" (UID: \"4bd944e7-4291-4cac-925b-2115e5bdb0ef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fmz28" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 
17:46:29.144273 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5c3da943-badb-4d63-a3e2-541a9ecf9347-profile-collector-cert\") pod \"olm-operator-6b444d44fb-fxsf8\" (UID: \"5c3da943-badb-4d63-a3e2-541a9ecf9347\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fxsf8" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.144302 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af8115d2-8e03-47f7-852c-4c7f08e1d089-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-s9v7z\" (UID: \"af8115d2-8e03-47f7-852c-4c7f08e1d089\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s9v7z" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.144341 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbfccbea-2fcb-4c86-b24e-963961a2cd3a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rbdrt\" (UID: \"cbfccbea-2fcb-4c86-b24e-963961a2cd3a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rbdrt" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.144363 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/69d9c3bb-1b14-45a5-8aa1-29c8b79fd809-signing-key\") pod \"service-ca-9c57cc56f-rptfq\" (UID: \"69d9c3bb-1b14-45a5-8aa1-29c8b79fd809\") " pod="openshift-service-ca/service-ca-9c57cc56f-rptfq" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.144394 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf8d275e-2c24-433b-84a2-0e415ab00c62-serving-cert\") pod \"service-ca-operator-777779d784-hqvn6\" (UID: \"bf8d275e-2c24-433b-84a2-0e415ab00c62\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hqvn6" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.144413 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/feae09d1-ce01-4b1a-a198-f267e94d9190-cert\") pod \"ingress-canary-m5zvm\" (UID: \"feae09d1-ce01-4b1a-a198-f267e94d9190\") " pod="openshift-ingress-canary/ingress-canary-m5zvm" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.144435 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8rds\" (UniqueName: \"kubernetes.io/projected/ed1c57d6-3991-4ebb-9149-63e38e19dc40-kube-api-access-q8rds\") pod \"console-f9d7485db-t9gdn\" (UID: \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\") " pod="openshift-console/console-f9d7485db-t9gdn" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.144454 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e-config\") pod \"etcd-operator-b45778765-xpf6f\" (UID: \"9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpf6f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.144477 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-kg4vt\" (UniqueName: \"kubernetes.io/projected/9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e-kube-api-access-kg4vt\") pod \"etcd-operator-b45778765-xpf6f\" (UID: \"9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpf6f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.144499 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p59pb\" (UniqueName: \"kubernetes.io/projected/789c22f5-7717-4e0f-9df4-6bc17ed6a932-kube-api-access-p59pb\") pod \"multus-admission-controller-857f4d67dd-gd8pp\" (UID: \"789c22f5-7717-4e0f-9df4-6bc17ed6a932\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gd8pp" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.144522 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14cc3153-fc79-4a60-b371-eae93f02ba78-config\") pod \"route-controller-manager-6576b87f9c-kb4mk\" (UID: \"14cc3153-fc79-4a60-b371-eae93f02ba78\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kb4mk" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.144541 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80bd8e1f-fde7-45e5-a482-8dd2d336bb3e-config-volume\") pod \"dns-default-vjddm\" (UID: \"80bd8e1f-fde7-45e5-a482-8dd2d336bb3e\") " pod="openshift-dns/dns-default-vjddm" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.145188 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f04bff26-dca4-4b98-82e8-582cf4305ee3-config\") pod \"kube-apiserver-operator-766d6c64bb-kpzts\" (UID: \"f04bff26-dca4-4b98-82e8-582cf4305ee3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kpzts" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.145285 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ed1c57d6-3991-4ebb-9149-63e38e19dc40-service-ca\") pod \"console-f9d7485db-t9gdn\" (UID: \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\") " pod="openshift-console/console-f9d7485db-t9gdn" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.146362 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1c640310-31cf-4feb-a173-5688042c6b6d-registry-certificates\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.146560 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c640310-31cf-4feb-a173-5688042c6b6d-trusted-ca\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.146629 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab-service-ca-bundle\") pod \"router-default-5444994796-nkxs4\" (UID: \"aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab\") " pod="openshift-ingress/router-default-5444994796-nkxs4" Feb 17 
17:46:29 crc kubenswrapper[4680]: E0217 17:46:29.147205 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:29.644883046 +0000 UTC m=+119.195781725 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.147595 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4a9f3446-4189-40c0-a189-1ad04182560f-trusted-ca\") pod \"ingress-operator-5b745b69d9-vdrlm\" (UID: \"4a9f3446-4189-40c0-a189-1ad04182560f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdrlm" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.148413 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14cc3153-fc79-4a60-b371-eae93f02ba78-client-ca\") pod \"route-controller-manager-6576b87f9c-kb4mk\" (UID: \"14cc3153-fc79-4a60-b371-eae93f02ba78\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kb4mk" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.148949 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ed1c57d6-3991-4ebb-9149-63e38e19dc40-console-config\") pod \"console-f9d7485db-t9gdn\" (UID: \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\") " pod="openshift-console/console-f9d7485db-t9gdn" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.149491 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e-etcd-service-ca\") pod \"etcd-operator-b45778765-xpf6f\" (UID: \"9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpf6f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.151018 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed1c57d6-3991-4ebb-9149-63e38e19dc40-console-serving-cert\") pod \"console-f9d7485db-t9gdn\" (UID: \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\") " pod="openshift-console/console-f9d7485db-t9gdn" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.151030 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e-etcd-client\") pod \"etcd-operator-b45778765-xpf6f\" (UID: \"9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpf6f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.151291 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1c640310-31cf-4feb-a173-5688042c6b6d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: 
\"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.151785 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ed1c57d6-3991-4ebb-9149-63e38e19dc40-oauth-serving-cert\") pod \"console-f9d7485db-t9gdn\" (UID: \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\") " pod="openshift-console/console-f9d7485db-t9gdn" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.151825 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab-metrics-certs\") pod \"router-default-5444994796-nkxs4\" (UID: \"aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab\") " pod="openshift-ingress/router-default-5444994796-nkxs4" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.151864 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/06368790-6302-406a-9f22-a1c07b6f91a2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-k7lq2\" (UID: \"06368790-6302-406a-9f22-a1c07b6f91a2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k7lq2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.152241 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e-etcd-ca\") pod \"etcd-operator-b45778765-xpf6f\" (UID: \"9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpf6f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.152537 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed1c57d6-3991-4ebb-9149-63e38e19dc40-trusted-ca-bundle\") pod \"console-f9d7485db-t9gdn\" (UID: \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\") " pod="openshift-console/console-f9d7485db-t9gdn" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.152569 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0b37c5d2-3899-45db-b4a3-976af83ba107-tmpfs\") pod \"packageserver-d55dfcdfc-4bdxb\" (UID: \"0b37c5d2-3899-45db-b4a3-976af83ba107\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4bdxb" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.152615 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ed1c57d6-3991-4ebb-9149-63e38e19dc40-console-oauth-config\") pod \"console-f9d7485db-t9gdn\" (UID: \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\") " pod="openshift-console/console-f9d7485db-t9gdn" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.152952 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0b37c5d2-3899-45db-b4a3-976af83ba107-apiservice-cert\") pod \"packageserver-d55dfcdfc-4bdxb\" (UID: \"0b37c5d2-3899-45db-b4a3-976af83ba107\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4bdxb" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.153046 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/1c640310-31cf-4feb-a173-5688042c6b6d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.153500 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab-default-certificate\") pod \"router-default-5444994796-nkxs4\" (UID: \"aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab\") " pod="openshift-ingress/router-default-5444994796-nkxs4" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.154185 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e-config\") pod \"etcd-operator-b45778765-xpf6f\" (UID: \"9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpf6f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.154441 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14cc3153-fc79-4a60-b371-eae93f02ba78-config\") pod \"route-controller-manager-6576b87f9c-kb4mk\" (UID: \"14cc3153-fc79-4a60-b371-eae93f02ba78\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kb4mk" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.154774 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4a9f3446-4189-40c0-a189-1ad04182560f-metrics-tls\") pod \"ingress-operator-5b745b69d9-vdrlm\" (UID: \"4a9f3446-4189-40c0-a189-1ad04182560f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdrlm" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.155058 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06368790-6302-406a-9f22-a1c07b6f91a2-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-k7lq2\" (UID: \"06368790-6302-406a-9f22-a1c07b6f91a2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k7lq2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.155975 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/da6d5deb-7032-441b-b78a-efbecc6c25fd-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5c84c\" (UID: \"da6d5deb-7032-441b-b78a-efbecc6c25fd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5c84c" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.155979 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1c640310-31cf-4feb-a173-5688042c6b6d-registry-tls\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.156245 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e-serving-cert\") pod \"etcd-operator-b45778765-xpf6f\" (UID: \"9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpf6f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 
17:46:29.158490 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab-stats-auth\") pod \"router-default-5444994796-nkxs4\" (UID: \"aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab\") " pod="openshift-ingress/router-default-5444994796-nkxs4" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.158792 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f04bff26-dca4-4b98-82e8-582cf4305ee3-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-kpzts\" (UID: \"f04bff26-dca4-4b98-82e8-582cf4305ee3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kpzts" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.158833 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14cc3153-fc79-4a60-b371-eae93f02ba78-serving-cert\") pod \"route-controller-manager-6576b87f9c-kb4mk\" (UID: \"14cc3153-fc79-4a60-b371-eae93f02ba78\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kb4mk" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.159295 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0b37c5d2-3899-45db-b4a3-976af83ba107-webhook-cert\") pod \"packageserver-d55dfcdfc-4bdxb\" (UID: \"0b37c5d2-3899-45db-b4a3-976af83ba107\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4bdxb" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.205144 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-z9fcm"] Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.210085 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbwgj\" (UniqueName: \"kubernetes.io/projected/14cc3153-fc79-4a60-b371-eae93f02ba78-kube-api-access-jbwgj\") pod \"route-controller-manager-6576b87f9c-kb4mk\" (UID: \"14cc3153-fc79-4a60-b371-eae93f02ba78\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kb4mk" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.217144 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-dp7b9"] Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.223537 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-488nm\" (UniqueName: \"kubernetes.io/projected/da6d5deb-7032-441b-b78a-efbecc6c25fd-kube-api-access-488nm\") pod \"cluster-samples-operator-665b6dd947-5c84c\" (UID: \"da6d5deb-7032-441b-b78a-efbecc6c25fd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5c84c" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.228208 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vhvpq"] Feb 17 17:46:29 crc kubenswrapper[4680]: W0217 17:46:29.233097 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13cf25e6_413f_472e_8178_439df59c10d7.slice/crio-703c0746645fec7e7f185b7c664dedf12b66a5d40eff7f0d3621c46bd0925b67 WatchSource:0}: Error finding container 703c0746645fec7e7f185b7c664dedf12b66a5d40eff7f0d3621c46bd0925b67: Status 404 returned error can't find the container with id 703c0746645fec7e7f185b7c664dedf12b66a5d40eff7f0d3621c46bd0925b67 Feb 17 
17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.247560 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsb82\" (UniqueName: \"kubernetes.io/projected/bf8d275e-2c24-433b-84a2-0e415ab00c62-kube-api-access-jsb82\") pod \"service-ca-operator-777779d784-hqvn6\" (UID: \"bf8d275e-2c24-433b-84a2-0e415ab00c62\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hqvn6" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.247623 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k69bn\" (UniqueName: \"kubernetes.io/projected/05be763f-68c8-46fe-a8c5-f30ee5afc5db-kube-api-access-k69bn\") pod \"machine-config-operator-74547568cd-gt6kd\" (UID: \"05be763f-68c8-46fe-a8c5-f30ee5afc5db\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gt6kd" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.247668 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bd944e7-4291-4cac-925b-2115e5bdb0ef-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fmz28\" (UID: \"4bd944e7-4291-4cac-925b-2115e5bdb0ef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fmz28" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.247701 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2953a7b7-7e10-4037-94d5-08322d5b7521-socket-dir\") pod \"csi-hostpathplugin-74j8f\" (UID: \"2953a7b7-7e10-4037-94d5-08322d5b7521\") " pod="hostpath-provisioner/csi-hostpathplugin-74j8f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.247731 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh4rz\" (UniqueName: \"kubernetes.io/projected/4b6a027a-aad9-445b-8bab-ac0004a7c565-kube-api-access-hh4rz\") pod \"collect-profiles-29522505-8tbt2\" (UID: \"4b6a027a-aad9-445b-8bab-ac0004a7c565\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522505-8tbt2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.247747 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fjjm\" (UniqueName: \"kubernetes.io/projected/4bd944e7-4291-4cac-925b-2115e5bdb0ef-kube-api-access-8fjjm\") pod \"openshift-controller-manager-operator-756b6f6bc6-fmz28\" (UID: \"4bd944e7-4291-4cac-925b-2115e5bdb0ef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fmz28" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.247767 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5c3da943-badb-4d63-a3e2-541a9ecf9347-profile-collector-cert\") pod \"olm-operator-6b444d44fb-fxsf8\" (UID: \"5c3da943-badb-4d63-a3e2-541a9ecf9347\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fxsf8" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.247791 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:29 crc 
kubenswrapper[4680]: I0217 17:46:29.247811 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af8115d2-8e03-47f7-852c-4c7f08e1d089-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-s9v7z\" (UID: \"af8115d2-8e03-47f7-852c-4c7f08e1d089\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s9v7z" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.247836 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbfccbea-2fcb-4c86-b24e-963961a2cd3a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rbdrt\" (UID: \"cbfccbea-2fcb-4c86-b24e-963961a2cd3a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rbdrt" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.247864 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/69d9c3bb-1b14-45a5-8aa1-29c8b79fd809-signing-key\") pod \"service-ca-9c57cc56f-rptfq\" (UID: \"69d9c3bb-1b14-45a5-8aa1-29c8b79fd809\") " pod="openshift-service-ca/service-ca-9c57cc56f-rptfq" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.247907 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf8d275e-2c24-433b-84a2-0e415ab00c62-serving-cert\") pod \"service-ca-operator-777779d784-hqvn6\" (UID: \"bf8d275e-2c24-433b-84a2-0e415ab00c62\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hqvn6" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.247923 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/feae09d1-ce01-4b1a-a198-f267e94d9190-cert\") pod \"ingress-canary-m5zvm\" (UID: \"feae09d1-ce01-4b1a-a198-f267e94d9190\") " pod="openshift-ingress-canary/ingress-canary-m5zvm" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.247965 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p59pb\" (UniqueName: \"kubernetes.io/projected/789c22f5-7717-4e0f-9df4-6bc17ed6a932-kube-api-access-p59pb\") pod \"multus-admission-controller-857f4d67dd-gd8pp\" (UID: \"789c22f5-7717-4e0f-9df4-6bc17ed6a932\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gd8pp" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.247986 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80bd8e1f-fde7-45e5-a482-8dd2d336bb3e-config-volume\") pod \"dns-default-vjddm\" (UID: \"80bd8e1f-fde7-45e5-a482-8dd2d336bb3e\") " pod="openshift-dns/dns-default-vjddm" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248007 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b6a027a-aad9-445b-8bab-ac0004a7c565-secret-volume\") pod \"collect-profiles-29522505-8tbt2\" (UID: \"4b6a027a-aad9-445b-8bab-ac0004a7c565\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522505-8tbt2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248023 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/2953a7b7-7e10-4037-94d5-08322d5b7521-registration-dir\") pod \"csi-hostpathplugin-74j8f\" (UID: \"2953a7b7-7e10-4037-94d5-08322d5b7521\") " pod="hostpath-provisioner/csi-hostpathplugin-74j8f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248044 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbfccbea-2fcb-4c86-b24e-963961a2cd3a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rbdrt\" (UID: \"cbfccbea-2fcb-4c86-b24e-963961a2cd3a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rbdrt" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248065 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2953a7b7-7e10-4037-94d5-08322d5b7521-mountpoint-dir\") pod \"csi-hostpathplugin-74j8f\" (UID: \"2953a7b7-7e10-4037-94d5-08322d5b7521\") " pod="hostpath-provisioner/csi-hostpathplugin-74j8f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248089 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bd944e7-4291-4cac-925b-2115e5bdb0ef-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fmz28\" (UID: \"4bd944e7-4291-4cac-925b-2115e5bdb0ef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fmz28" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248107 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9dad1493-ca2b-4e2d-8c5f-8374b30deefa-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-x9rhw\" (UID: \"9dad1493-ca2b-4e2d-8c5f-8374b30deefa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x9rhw" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248126 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj77b\" (UniqueName: \"kubernetes.io/projected/2953a7b7-7e10-4037-94d5-08322d5b7521-kube-api-access-rj77b\") pod \"csi-hostpathplugin-74j8f\" (UID: \"2953a7b7-7e10-4037-94d5-08322d5b7521\") " pod="hostpath-provisioner/csi-hostpathplugin-74j8f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248146 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa215132-e3f4-460b-85d2-19d7dff33c1a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-khf8m\" (UID: \"fa215132-e3f4-460b-85d2-19d7dff33c1a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khf8m" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248166 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/789c22f5-7717-4e0f-9df4-6bc17ed6a932-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-gd8pp\" (UID: \"789c22f5-7717-4e0f-9df4-6bc17ed6a932\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gd8pp" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248186 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg2wr\" (UniqueName: \"kubernetes.io/projected/3beac5a6-c506-4360-bab3-277801f0fb05-kube-api-access-pg2wr\") pod 
\"machine-config-server-pz228\" (UID: \"3beac5a6-c506-4360-bab3-277801f0fb05\") " pod="openshift-machine-config-operator/machine-config-server-pz228" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248243 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3beac5a6-c506-4360-bab3-277801f0fb05-node-bootstrap-token\") pod \"machine-config-server-pz228\" (UID: \"3beac5a6-c506-4360-bab3-277801f0fb05\") " pod="openshift-machine-config-operator/machine-config-server-pz228" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248265 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwwlj\" (UniqueName: \"kubernetes.io/projected/feae09d1-ce01-4b1a-a198-f267e94d9190-kube-api-access-xwwlj\") pod \"ingress-canary-m5zvm\" (UID: \"feae09d1-ce01-4b1a-a198-f267e94d9190\") " pod="openshift-ingress-canary/ingress-canary-m5zvm" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248291 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af8115d2-8e03-47f7-852c-4c7f08e1d089-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-s9v7z\" (UID: \"af8115d2-8e03-47f7-852c-4c7f08e1d089\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s9v7z" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248304 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2953a7b7-7e10-4037-94d5-08322d5b7521-socket-dir\") pod \"csi-hostpathplugin-74j8f\" (UID: \"2953a7b7-7e10-4037-94d5-08322d5b7521\") " pod="hostpath-provisioner/csi-hostpathplugin-74j8f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248334 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa215132-e3f4-460b-85d2-19d7dff33c1a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-khf8m\" (UID: \"fa215132-e3f4-460b-85d2-19d7dff33c1a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khf8m" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248360 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jc7m\" (UniqueName: \"kubernetes.io/projected/210b630c-d271-49ec-9381-795e5e92cf4e-kube-api-access-5jc7m\") pod \"package-server-manager-789f6589d5-lsp6v\" (UID: \"210b630c-d271-49ec-9381-795e5e92cf4e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lsp6v" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248390 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx7tb\" (UniqueName: \"kubernetes.io/projected/67e5bb04-4920-4f51-a729-0af77b16490e-kube-api-access-gx7tb\") pod \"control-plane-machine-set-operator-78cbb6b69f-c462h\" (UID: \"67e5bb04-4920-4f51-a729-0af77b16490e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c462h" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248409 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brwmd\" (UniqueName: \"kubernetes.io/projected/a1816a7c-9c0b-49db-b812-dd9b2c605760-kube-api-access-brwmd\") pod \"migrator-59844c95c7-bzv2j\" (UID: \"a1816a7c-9c0b-49db-b812-dd9b2c605760\") " 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bzv2j" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248450 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6cgl\" (UniqueName: \"kubernetes.io/projected/fb8e2703-d2bb-403c-9912-e7d9a1e2475d-kube-api-access-m6cgl\") pod \"catalog-operator-68c6474976-prsk2\" (UID: \"fb8e2703-d2bb-403c-9912-e7d9a1e2475d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-prsk2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248472 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt6tb\" (UniqueName: \"kubernetes.io/projected/cbfccbea-2fcb-4c86-b24e-963961a2cd3a-kube-api-access-vt6tb\") pod \"kube-storage-version-migrator-operator-b67b599dd-rbdrt\" (UID: \"cbfccbea-2fcb-4c86-b24e-963961a2cd3a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rbdrt" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248496 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf8d275e-2c24-433b-84a2-0e415ab00c62-config\") pod \"service-ca-operator-777779d784-hqvn6\" (UID: \"bf8d275e-2c24-433b-84a2-0e415ab00c62\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hqvn6" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248517 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fb8e2703-d2bb-403c-9912-e7d9a1e2475d-profile-collector-cert\") pod \"catalog-operator-68c6474976-prsk2\" (UID: \"fb8e2703-d2bb-403c-9912-e7d9a1e2475d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-prsk2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248538 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9fns\" (UniqueName: \"kubernetes.io/projected/69d9c3bb-1b14-45a5-8aa1-29c8b79fd809-kube-api-access-h9fns\") pod \"service-ca-9c57cc56f-rptfq\" (UID: \"69d9c3bb-1b14-45a5-8aa1-29c8b79fd809\") " pod="openshift-service-ca/service-ca-9c57cc56f-rptfq" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248554 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17edf562-a2eb-4263-bbce-c65013fd645f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rjbs8\" (UID: \"17edf562-a2eb-4263-bbce-c65013fd645f\") " pod="openshift-marketplace/marketplace-operator-79b997595-rjbs8" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248583 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/05be763f-68c8-46fe-a8c5-f30ee5afc5db-proxy-tls\") pod \"machine-config-operator-74547568cd-gt6kd\" (UID: \"05be763f-68c8-46fe-a8c5-f30ee5afc5db\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gt6kd" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248603 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/69d9c3bb-1b14-45a5-8aa1-29c8b79fd809-signing-cabundle\") pod \"service-ca-9c57cc56f-rptfq\" (UID: \"69d9c3bb-1b14-45a5-8aa1-29c8b79fd809\") " pod="openshift-service-ca/service-ca-9c57cc56f-rptfq" Feb 17 17:46:29 crc 
kubenswrapper[4680]: I0217 17:46:29.248641 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/05be763f-68c8-46fe-a8c5-f30ee5afc5db-images\") pod \"machine-config-operator-74547568cd-gt6kd\" (UID: \"05be763f-68c8-46fe-a8c5-f30ee5afc5db\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gt6kd" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248657 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/05be763f-68c8-46fe-a8c5-f30ee5afc5db-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gt6kd\" (UID: \"05be763f-68c8-46fe-a8c5-f30ee5afc5db\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gt6kd" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248678 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/17edf562-a2eb-4263-bbce-c65013fd645f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rjbs8\" (UID: \"17edf562-a2eb-4263-bbce-c65013fd645f\") " pod="openshift-marketplace/marketplace-operator-79b997595-rjbs8" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248696 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3beac5a6-c506-4360-bab3-277801f0fb05-certs\") pod \"machine-config-server-pz228\" (UID: \"3beac5a6-c506-4360-bab3-277801f0fb05\") " pod="openshift-machine-config-operator/machine-config-server-pz228" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248716 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b6a027a-aad9-445b-8bab-ac0004a7c565-config-volume\") pod \"collect-profiles-29522505-8tbt2\" (UID: \"4b6a027a-aad9-445b-8bab-ac0004a7c565\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522505-8tbt2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248741 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa215132-e3f4-460b-85d2-19d7dff33c1a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-khf8m\" (UID: \"fa215132-e3f4-460b-85d2-19d7dff33c1a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khf8m" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248757 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/210b630c-d271-49ec-9381-795e5e92cf4e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-lsp6v\" (UID: \"210b630c-d271-49ec-9381-795e5e92cf4e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lsp6v" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248778 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmpm4\" (UniqueName: \"kubernetes.io/projected/17edf562-a2eb-4263-bbce-c65013fd645f-kube-api-access-wmpm4\") pod \"marketplace-operator-79b997595-rjbs8\" (UID: \"17edf562-a2eb-4263-bbce-c65013fd645f\") " pod="openshift-marketplace/marketplace-operator-79b997595-rjbs8" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248797 4680 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hnvdh\" (UniqueName: \"kubernetes.io/projected/5c3da943-badb-4d63-a3e2-541a9ecf9347-kube-api-access-hnvdh\") pod \"olm-operator-6b444d44fb-fxsf8\" (UID: \"5c3da943-badb-4d63-a3e2-541a9ecf9347\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fxsf8" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248815 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af8115d2-8e03-47f7-852c-4c7f08e1d089-config\") pod \"kube-controller-manager-operator-78b949d7b-s9v7z\" (UID: \"af8115d2-8e03-47f7-852c-4c7f08e1d089\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s9v7z" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248834 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fb8e2703-d2bb-403c-9912-e7d9a1e2475d-srv-cert\") pod \"catalog-operator-68c6474976-prsk2\" (UID: \"fb8e2703-d2bb-403c-9912-e7d9a1e2475d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-prsk2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248862 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2953a7b7-7e10-4037-94d5-08322d5b7521-plugins-dir\") pod \"csi-hostpathplugin-74j8f\" (UID: \"2953a7b7-7e10-4037-94d5-08322d5b7521\") " pod="hostpath-provisioner/csi-hostpathplugin-74j8f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248880 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5c3da943-badb-4d63-a3e2-541a9ecf9347-srv-cert\") pod \"olm-operator-6b444d44fb-fxsf8\" (UID: \"5c3da943-badb-4d63-a3e2-541a9ecf9347\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fxsf8" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248896 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/80bd8e1f-fde7-45e5-a482-8dd2d336bb3e-metrics-tls\") pod \"dns-default-vjddm\" (UID: \"80bd8e1f-fde7-45e5-a482-8dd2d336bb3e\") " pod="openshift-dns/dns-default-vjddm" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248924 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2953a7b7-7e10-4037-94d5-08322d5b7521-csi-data-dir\") pod \"csi-hostpathplugin-74j8f\" (UID: \"2953a7b7-7e10-4037-94d5-08322d5b7521\") " pod="hostpath-provisioner/csi-hostpathplugin-74j8f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248944 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/67e5bb04-4920-4f51-a729-0af77b16490e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-c462h\" (UID: \"67e5bb04-4920-4f51-a729-0af77b16490e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c462h" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248968 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n68nh\" (UniqueName: \"kubernetes.io/projected/80bd8e1f-fde7-45e5-a482-8dd2d336bb3e-kube-api-access-n68nh\") pod \"dns-default-vjddm\" (UID: 
\"80bd8e1f-fde7-45e5-a482-8dd2d336bb3e\") " pod="openshift-dns/dns-default-vjddm" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.248990 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnpfh\" (UniqueName: \"kubernetes.io/projected/9dad1493-ca2b-4e2d-8c5f-8374b30deefa-kube-api-access-hnpfh\") pod \"machine-config-controller-84d6567774-x9rhw\" (UID: \"9dad1493-ca2b-4e2d-8c5f-8374b30deefa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x9rhw" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.249012 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9dad1493-ca2b-4e2d-8c5f-8374b30deefa-proxy-tls\") pod \"machine-config-controller-84d6567774-x9rhw\" (UID: \"9dad1493-ca2b-4e2d-8c5f-8374b30deefa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x9rhw" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.249198 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2953a7b7-7e10-4037-94d5-08322d5b7521-mountpoint-dir\") pod \"csi-hostpathplugin-74j8f\" (UID: \"2953a7b7-7e10-4037-94d5-08322d5b7521\") " pod="hostpath-provisioner/csi-hostpathplugin-74j8f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.251515 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bd944e7-4291-4cac-925b-2115e5bdb0ef-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fmz28\" (UID: \"4bd944e7-4291-4cac-925b-2115e5bdb0ef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fmz28" Feb 17 17:46:29 crc kubenswrapper[4680]: E0217 17:46:29.252616 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:29.752602711 +0000 UTC m=+119.303501390 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.252979 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9dad1493-ca2b-4e2d-8c5f-8374b30deefa-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-x9rhw\" (UID: \"9dad1493-ca2b-4e2d-8c5f-8374b30deefa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x9rhw" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.253604 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa215132-e3f4-460b-85d2-19d7dff33c1a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-khf8m\" (UID: \"fa215132-e3f4-460b-85d2-19d7dff33c1a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khf8m" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.255708 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5c3da943-badb-4d63-a3e2-541a9ecf9347-profile-collector-cert\") pod \"olm-operator-6b444d44fb-fxsf8\" (UID: \"5c3da943-badb-4d63-a3e2-541a9ecf9347\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fxsf8" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.256435 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80bd8e1f-fde7-45e5-a482-8dd2d336bb3e-config-volume\") pod \"dns-default-vjddm\" (UID: \"80bd8e1f-fde7-45e5-a482-8dd2d336bb3e\") " pod="openshift-dns/dns-default-vjddm" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.257423 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2953a7b7-7e10-4037-94d5-08322d5b7521-registration-dir\") pod \"csi-hostpathplugin-74j8f\" (UID: \"2953a7b7-7e10-4037-94d5-08322d5b7521\") " pod="hostpath-provisioner/csi-hostpathplugin-74j8f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.258761 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17edf562-a2eb-4263-bbce-c65013fd645f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rjbs8\" (UID: \"17edf562-a2eb-4263-bbce-c65013fd645f\") " pod="openshift-marketplace/marketplace-operator-79b997595-rjbs8" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.261700 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf8d275e-2c24-433b-84a2-0e415ab00c62-config\") pod \"service-ca-operator-777779d784-hqvn6\" (UID: \"bf8d275e-2c24-433b-84a2-0e415ab00c62\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hqvn6" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.262275 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cbfccbea-2fcb-4c86-b24e-963961a2cd3a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rbdrt\" (UID: \"cbfccbea-2fcb-4c86-b24e-963961a2cd3a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rbdrt" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.262298 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/69d9c3bb-1b14-45a5-8aa1-29c8b79fd809-signing-cabundle\") pod \"service-ca-9c57cc56f-rptfq\" (UID: \"69d9c3bb-1b14-45a5-8aa1-29c8b79fd809\") " pod="openshift-service-ca/service-ca-9c57cc56f-rptfq" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.262864 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bd944e7-4291-4cac-925b-2115e5bdb0ef-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fmz28\" (UID: \"4bd944e7-4291-4cac-925b-2115e5bdb0ef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fmz28" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.263227 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgczf\" (UniqueName: \"kubernetes.io/projected/1c640310-31cf-4feb-a173-5688042c6b6d-kube-api-access-kgczf\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.263412 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/05be763f-68c8-46fe-a8c5-f30ee5afc5db-images\") pod \"machine-config-operator-74547568cd-gt6kd\" (UID: \"05be763f-68c8-46fe-a8c5-f30ee5afc5db\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gt6kd" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.263831 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/05be763f-68c8-46fe-a8c5-f30ee5afc5db-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gt6kd\" (UID: \"05be763f-68c8-46fe-a8c5-f30ee5afc5db\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gt6kd" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.264455 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af8115d2-8e03-47f7-852c-4c7f08e1d089-config\") pod \"kube-controller-manager-operator-78b949d7b-s9v7z\" (UID: \"af8115d2-8e03-47f7-852c-4c7f08e1d089\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s9v7z" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.265140 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2953a7b7-7e10-4037-94d5-08322d5b7521-plugins-dir\") pod \"csi-hostpathplugin-74j8f\" (UID: \"2953a7b7-7e10-4037-94d5-08322d5b7521\") " pod="hostpath-provisioner/csi-hostpathplugin-74j8f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.268793 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/feae09d1-ce01-4b1a-a198-f267e94d9190-cert\") pod \"ingress-canary-m5zvm\" (UID: \"feae09d1-ce01-4b1a-a198-f267e94d9190\") " 
pod="openshift-ingress-canary/ingress-canary-m5zvm" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.269137 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/789c22f5-7717-4e0f-9df4-6bc17ed6a932-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-gd8pp\" (UID: \"789c22f5-7717-4e0f-9df4-6bc17ed6a932\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gd8pp" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.269796 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/05be763f-68c8-46fe-a8c5-f30ee5afc5db-proxy-tls\") pod \"machine-config-operator-74547568cd-gt6kd\" (UID: \"05be763f-68c8-46fe-a8c5-f30ee5afc5db\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gt6kd" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.270439 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbfccbea-2fcb-4c86-b24e-963961a2cd3a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rbdrt\" (UID: \"cbfccbea-2fcb-4c86-b24e-963961a2cd3a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rbdrt" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.270638 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2953a7b7-7e10-4037-94d5-08322d5b7521-csi-data-dir\") pod \"csi-hostpathplugin-74j8f\" (UID: \"2953a7b7-7e10-4037-94d5-08322d5b7521\") " pod="hostpath-provisioner/csi-hostpathplugin-74j8f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.270663 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3beac5a6-c506-4360-bab3-277801f0fb05-node-bootstrap-token\") pod \"machine-config-server-pz228\" (UID: \"3beac5a6-c506-4360-bab3-277801f0fb05\") " pod="openshift-machine-config-operator/machine-config-server-pz228" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.271024 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af8115d2-8e03-47f7-852c-4c7f08e1d089-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-s9v7z\" (UID: \"af8115d2-8e03-47f7-852c-4c7f08e1d089\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s9v7z" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.274832 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b6a027a-aad9-445b-8bab-ac0004a7c565-secret-volume\") pod \"collect-profiles-29522505-8tbt2\" (UID: \"4b6a027a-aad9-445b-8bab-ac0004a7c565\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522505-8tbt2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.280545 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4a9f3446-4189-40c0-a189-1ad04182560f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vdrlm\" (UID: \"4a9f3446-4189-40c0-a189-1ad04182560f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdrlm" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.280945 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/80bd8e1f-fde7-45e5-a482-8dd2d336bb3e-metrics-tls\") pod \"dns-default-vjddm\" (UID: \"80bd8e1f-fde7-45e5-a482-8dd2d336bb3e\") " pod="openshift-dns/dns-default-vjddm" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.281257 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf8d275e-2c24-433b-84a2-0e415ab00c62-serving-cert\") pod \"service-ca-operator-777779d784-hqvn6\" (UID: \"bf8d275e-2c24-433b-84a2-0e415ab00c62\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hqvn6" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.281554 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa215132-e3f4-460b-85d2-19d7dff33c1a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-khf8m\" (UID: \"fa215132-e3f4-460b-85d2-19d7dff33c1a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khf8m" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.281696 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fb8e2703-d2bb-403c-9912-e7d9a1e2475d-profile-collector-cert\") pod \"catalog-operator-68c6474976-prsk2\" (UID: \"fb8e2703-d2bb-403c-9912-e7d9a1e2475d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-prsk2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.281875 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/69d9c3bb-1b14-45a5-8aa1-29c8b79fd809-signing-key\") pod \"service-ca-9c57cc56f-rptfq\" (UID: \"69d9c3bb-1b14-45a5-8aa1-29c8b79fd809\") " pod="openshift-service-ca/service-ca-9c57cc56f-rptfq" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.282017 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fb8e2703-d2bb-403c-9912-e7d9a1e2475d-srv-cert\") pod \"catalog-operator-68c6474976-prsk2\" (UID: \"fb8e2703-d2bb-403c-9912-e7d9a1e2475d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-prsk2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.282376 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/210b630c-d271-49ec-9381-795e5e92cf4e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-lsp6v\" (UID: \"210b630c-d271-49ec-9381-795e5e92cf4e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lsp6v" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.282427 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b6a027a-aad9-445b-8bab-ac0004a7c565-config-volume\") pod \"collect-profiles-29522505-8tbt2\" (UID: \"4b6a027a-aad9-445b-8bab-ac0004a7c565\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522505-8tbt2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.283650 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/17edf562-a2eb-4263-bbce-c65013fd645f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rjbs8\" (UID: \"17edf562-a2eb-4263-bbce-c65013fd645f\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-rjbs8" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.283683 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/67e5bb04-4920-4f51-a729-0af77b16490e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-c462h\" (UID: \"67e5bb04-4920-4f51-a729-0af77b16490e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c462h" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.287844 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3beac5a6-c506-4360-bab3-277801f0fb05-certs\") pod \"machine-config-server-pz228\" (UID: \"3beac5a6-c506-4360-bab3-277801f0fb05\") " pod="openshift-machine-config-operator/machine-config-server-pz228" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.287907 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9dad1493-ca2b-4e2d-8c5f-8374b30deefa-proxy-tls\") pod \"machine-config-controller-84d6567774-x9rhw\" (UID: \"9dad1493-ca2b-4e2d-8c5f-8374b30deefa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x9rhw" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.290619 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kb4mk" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.300619 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5c3da943-badb-4d63-a3e2-541a9ecf9347-srv-cert\") pod \"olm-operator-6b444d44fb-fxsf8\" (UID: \"5c3da943-badb-4d63-a3e2-541a9ecf9347\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fxsf8" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.303795 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj6gr\" (UniqueName: \"kubernetes.io/projected/4a9f3446-4189-40c0-a189-1ad04182560f-kube-api-access-lj6gr\") pod \"ingress-operator-5b745b69d9-vdrlm\" (UID: \"4a9f3446-4189-40c0-a189-1ad04182560f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdrlm" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.304543 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1c640310-31cf-4feb-a173-5688042c6b6d-bound-sa-token\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.306434 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdrlm" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.313179 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5c84c" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.323134 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgrtz\" (UniqueName: \"kubernetes.io/projected/aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab-kube-api-access-bgrtz\") pod \"router-default-5444994796-nkxs4\" (UID: \"aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab\") " pod="openshift-ingress/router-default-5444994796-nkxs4" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.340730 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-nkxs4" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.341892 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f04bff26-dca4-4b98-82e8-582cf4305ee3-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-kpzts\" (UID: \"f04bff26-dca4-4b98-82e8-582cf4305ee3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kpzts" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.351354 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:29 crc kubenswrapper[4680]: E0217 17:46:29.351938 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:29.851916817 +0000 UTC m=+119.402815496 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.362448 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwls7\" (UniqueName: \"kubernetes.io/projected/0b37c5d2-3899-45db-b4a3-976af83ba107-kube-api-access-cwls7\") pod \"packageserver-d55dfcdfc-4bdxb\" (UID: \"0b37c5d2-3899-45db-b4a3-976af83ba107\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4bdxb" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.377548 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8rds\" (UniqueName: \"kubernetes.io/projected/ed1c57d6-3991-4ebb-9149-63e38e19dc40-kube-api-access-q8rds\") pod \"console-f9d7485db-t9gdn\" (UID: \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\") " pod="openshift-console/console-f9d7485db-t9gdn" Feb 17 17:46:29 crc kubenswrapper[4680]: W0217 17:46:29.404076 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa6ffa49_187a_4a3d_9ab1_e798d1e0dfab.slice/crio-4141d658c8d244cae43d12dfce7596a78d1bfeb88ce58ffc1e3048c6e3edf010 WatchSource:0}: Error finding container 4141d658c8d244cae43d12dfce7596a78d1bfeb88ce58ffc1e3048c6e3edf010: Status 404 returned error can't find the container with id 4141d658c8d244cae43d12dfce7596a78d1bfeb88ce58ffc1e3048c6e3edf010 Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.405713 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2pm9\" (UniqueName: \"kubernetes.io/projected/06368790-6302-406a-9f22-a1c07b6f91a2-kube-api-access-z2pm9\") pod \"cluster-image-registry-operator-dc59b4c8b-k7lq2\" (UID: \"06368790-6302-406a-9f22-a1c07b6f91a2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k7lq2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.431800 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg4vt\" (UniqueName: \"kubernetes.io/projected/9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e-kube-api-access-kg4vt\") pod \"etcd-operator-b45778765-xpf6f\" (UID: \"9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpf6f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.440118 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/06368790-6302-406a-9f22-a1c07b6f91a2-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-k7lq2\" (UID: \"06368790-6302-406a-9f22-a1c07b6f91a2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k7lq2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.479951 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 
17 17:46:29 crc kubenswrapper[4680]: E0217 17:46:29.480614 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:29.980602084 +0000 UTC m=+119.531500763 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.504619 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fjjm\" (UniqueName: \"kubernetes.io/projected/4bd944e7-4291-4cac-925b-2115e5bdb0ef-kube-api-access-8fjjm\") pod \"openshift-controller-manager-operator-756b6f6bc6-fmz28\" (UID: \"4bd944e7-4291-4cac-925b-2115e5bdb0ef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fmz28" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.514411 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsb82\" (UniqueName: \"kubernetes.io/projected/bf8d275e-2c24-433b-84a2-0e415ab00c62-kube-api-access-jsb82\") pod \"service-ca-operator-777779d784-hqvn6\" (UID: \"bf8d275e-2c24-433b-84a2-0e415ab00c62\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hqvn6" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.536925 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k69bn\" (UniqueName: \"kubernetes.io/projected/05be763f-68c8-46fe-a8c5-f30ee5afc5db-kube-api-access-k69bn\") pod \"machine-config-operator-74547568cd-gt6kd\" (UID: \"05be763f-68c8-46fe-a8c5-f30ee5afc5db\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gt6kd" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.543865 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh4rz\" (UniqueName: \"kubernetes.io/projected/4b6a027a-aad9-445b-8bab-ac0004a7c565-kube-api-access-hh4rz\") pod \"collect-profiles-29522505-8tbt2\" (UID: \"4b6a027a-aad9-445b-8bab-ac0004a7c565\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522505-8tbt2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.559426 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af8115d2-8e03-47f7-852c-4c7f08e1d089-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-s9v7z\" (UID: \"af8115d2-8e03-47f7-852c-4c7f08e1d089\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s9v7z" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.567408 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k7lq2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.576097 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-xpf6f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.581442 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj77b\" (UniqueName: \"kubernetes.io/projected/2953a7b7-7e10-4037-94d5-08322d5b7521-kube-api-access-rj77b\") pod \"csi-hostpathplugin-74j8f\" (UID: \"2953a7b7-7e10-4037-94d5-08322d5b7521\") " pod="hostpath-provisioner/csi-hostpathplugin-74j8f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.581447 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:29 crc kubenswrapper[4680]: E0217 17:46:29.581707 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:30.081684984 +0000 UTC m=+119.632583683 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.598404 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kpzts" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.602065 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6cgl\" (UniqueName: \"kubernetes.io/projected/fb8e2703-d2bb-403c-9912-e7d9a1e2475d-kube-api-access-m6cgl\") pod \"catalog-operator-68c6474976-prsk2\" (UID: \"fb8e2703-d2bb-403c-9912-e7d9a1e2475d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-prsk2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.621292 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4bdxb" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.621838 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa215132-e3f4-460b-85d2-19d7dff33c1a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-khf8m\" (UID: \"fa215132-e3f4-460b-85d2-19d7dff33c1a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khf8m" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.630499 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-t9gdn" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.648125 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s9v7z" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.657030 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-prsk2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.659707 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jc7m\" (UniqueName: \"kubernetes.io/projected/210b630c-d271-49ec-9381-795e5e92cf4e-kube-api-access-5jc7m\") pod \"package-server-manager-789f6589d5-lsp6v\" (UID: \"210b630c-d271-49ec-9381-795e5e92cf4e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lsp6v" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.664906 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gt6kd" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.672046 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx7tb\" (UniqueName: \"kubernetes.io/projected/67e5bb04-4920-4f51-a729-0af77b16490e-kube-api-access-gx7tb\") pod \"control-plane-machine-set-operator-78cbb6b69f-c462h\" (UID: \"67e5bb04-4920-4f51-a729-0af77b16490e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c462h" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.685627 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:29 crc kubenswrapper[4680]: E0217 17:46:29.685989 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:30.185976264 +0000 UTC m=+119.736874943 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.686388 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hqvn6" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.690517 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" event={"ID":"387f58c3-60a3-4f10-a29c-3bd2afca3e5d","Type":"ContainerStarted","Data":"6286c9d97b5405053eebd2539e39a4914e575bd009ccc1501d9a47eb55898c4a"} Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.691967 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" event={"ID":"13cf25e6-413f-472e-8178-439df59c10d7","Type":"ContainerStarted","Data":"703c0746645fec7e7f185b7c664dedf12b66a5d40eff7f0d3621c46bd0925b67"} Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.700114 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-m96mg" event={"ID":"0da17dc9-f5b1-4877-8242-28d767846f40","Type":"ContainerStarted","Data":"8dfb47378e5f2755f625f165738f5079edda3c346d1fdc3965d6d2a73533ccf3"} Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.701667 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5c84c"] Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.702254 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522505-8tbt2" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.703036 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brwmd\" (UniqueName: \"kubernetes.io/projected/a1816a7c-9c0b-49db-b812-dd9b2c605760-kube-api-access-brwmd\") pod \"migrator-59844c95c7-bzv2j\" (UID: \"a1816a7c-9c0b-49db-b812-dd9b2c605760\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bzv2j" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.707895 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg2wr\" (UniqueName: \"kubernetes.io/projected/3beac5a6-c506-4360-bab3-277801f0fb05-kube-api-access-pg2wr\") pod \"machine-config-server-pz228\" (UID: \"3beac5a6-c506-4360-bab3-277801f0fb05\") " pod="openshift-machine-config-operator/machine-config-server-pz228" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.709716 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khf8m" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.711670 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-z9fcm" event={"ID":"8e87e04c-2d08-4f13-8b5a-db27f849132e","Type":"ContainerStarted","Data":"d90bd984878544f4507264b7910a06ad771fbd515d68c760daaf1922e4d39ff3"} Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.711706 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-z9fcm" event={"ID":"8e87e04c-2d08-4f13-8b5a-db27f849132e","Type":"ContainerStarted","Data":"64b0d655c570b331907079aa1e974f9d99352f2d8374abaa2cbc8657b835bfa8"} Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.712225 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-z9fcm" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.716963 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-nkxs4" event={"ID":"aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab","Type":"ContainerStarted","Data":"4141d658c8d244cae43d12dfce7596a78d1bfeb88ce58ffc1e3048c6e3edf010"} Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.723660 4680 patch_prober.go:28] interesting pod/downloads-7954f5f757-z9fcm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.723702 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z9fcm" podUID="8e87e04c-2d08-4f13-8b5a-db27f849132e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.726769 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt6tb\" (UniqueName: \"kubernetes.io/projected/cbfccbea-2fcb-4c86-b24e-963961a2cd3a-kube-api-access-vt6tb\") pod \"kube-storage-version-migrator-operator-b67b599dd-rbdrt\" (UID: \"cbfccbea-2fcb-4c86-b24e-963961a2cd3a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rbdrt" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.732345 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ffq2r" event={"ID":"2612e265-77ef-4e49-ba43-dddb142b0856","Type":"ContainerStarted","Data":"0db63d7266a0a0647ef126b3df2d9cc64aec7a6bb73753dcbc2ed0e8269bb455"} Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.732388 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ffq2r" event={"ID":"2612e265-77ef-4e49-ba43-dddb142b0856","Type":"ContainerStarted","Data":"c8b972dc264962bb8a243088b2b373e38f95a54c90ae1c334dc191bc85198a00"} Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.735948 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lsp6v" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.738853 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwwlj\" (UniqueName: \"kubernetes.io/projected/feae09d1-ce01-4b1a-a198-f267e94d9190-kube-api-access-xwwlj\") pod \"ingress-canary-m5zvm\" (UID: \"feae09d1-ce01-4b1a-a198-f267e94d9190\") " pod="openshift-ingress-canary/ingress-canary-m5zvm" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.740375 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-xmlr4" event={"ID":"d7dac5a2-b81a-483f-9686-a38012959621","Type":"ContainerStarted","Data":"04d51add39e9782c374dc5d17559a10eb24130c02e018dfaa53b370a8f0462f2"} Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.740409 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-xmlr4" event={"ID":"d7dac5a2-b81a-483f-9686-a38012959621","Type":"ContainerStarted","Data":"862b007f3f7167a99451eb5afac92f0a80e45465616f161ae162e502db97960e"} Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.744702 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5zzk5" event={"ID":"8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa","Type":"ContainerStarted","Data":"070ef90fbf6014111161b992bc77e566c8a69a06f8055bee3364af8afb55f2c0"} Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.744766 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5zzk5" event={"ID":"8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa","Type":"ContainerStarted","Data":"5907717f6a04064101d401212c74cd791a881895237373d12f20cc585f0e1ba4"} Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.745661 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-5zzk5" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.745974 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c462h" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.748643 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-2l4fg" event={"ID":"8c547720-450f-4714-827e-7694bd2b89f7","Type":"ContainerStarted","Data":"56c81ddce00b48634892b06adea9ae631d3a4157815bfa05d0ba6a2eb6fca7d7"} Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.748670 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-2l4fg" event={"ID":"8c547720-450f-4714-827e-7694bd2b89f7","Type":"ContainerStarted","Data":"ccb9973245262522459c7e9b8e7b7169dfb773119445beb7b8ea9b00439bbaf1"} Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.749209 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-2l4fg" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.760168 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vdrlm"] Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.770215 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rbdrt" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.770588 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fmz28" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.771042 4680 patch_prober.go:28] interesting pod/console-operator-58897d9998-2l4fg container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.771071 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-2l4fg" podUID="8c547720-450f-4714-827e-7694bd2b89f7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.771134 4680 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-5zzk5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.771147 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-5zzk5" podUID="8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.771799 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmpm4\" (UniqueName: \"kubernetes.io/projected/17edf562-a2eb-4263-bbce-c65013fd645f-kube-api-access-wmpm4\") pod \"marketplace-operator-79b997595-rjbs8\" (UID: \"17edf562-a2eb-4263-bbce-c65013fd645f\") " pod="openshift-marketplace/marketplace-operator-79b997595-rjbs8" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.785580 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-pz228" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.786258 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:29 crc kubenswrapper[4680]: E0217 17:46:29.790811 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:30.29078434 +0000 UTC m=+119.841683019 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.794873 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-m5zvm" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.799634 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vhvpq" event={"ID":"3abf57a9-2bd6-4f7f-ba75-5fe928201df8","Type":"ContainerStarted","Data":"126d1352a0a456f33306babb02fd1b2531d852601822b22bb44b85e1dd237005"} Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.803046 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9fns\" (UniqueName: \"kubernetes.io/projected/69d9c3bb-1b14-45a5-8aa1-29c8b79fd809-kube-api-access-h9fns\") pod \"service-ca-9c57cc56f-rptfq\" (UID: \"69d9c3bb-1b14-45a5-8aa1-29c8b79fd809\") " pod="openshift-service-ca/service-ca-9c57cc56f-rptfq" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.805070 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p59pb\" (UniqueName: \"kubernetes.io/projected/789c22f5-7717-4e0f-9df4-6bc17ed6a932-kube-api-access-p59pb\") pod \"multus-admission-controller-857f4d67dd-gd8pp\" (UID: \"789c22f5-7717-4e0f-9df4-6bc17ed6a932\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gd8pp" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.823064 4680 generic.go:334] "Generic (PLEG): container finished" podID="ad93e803-b1ee-4401-a6c4-0f510b6b8735" containerID="37603939bdf64592c1f2e34c8819ae8cb51d0cbb88114eb8b9ea5dd2282ac464" exitCode=0 Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.823199 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-74j8f" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.824226 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2h2c" event={"ID":"ad93e803-b1ee-4401-a6c4-0f510b6b8735","Type":"ContainerDied","Data":"37603939bdf64592c1f2e34c8819ae8cb51d0cbb88114eb8b9ea5dd2282ac464"} Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.828149 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2h2c" event={"ID":"ad93e803-b1ee-4401-a6c4-0f510b6b8735","Type":"ContainerStarted","Data":"7825e41c374a618d1d939d8bae3586e8a2afb82acd4a598e7877ba079dd60ee4"} Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.837558 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnvdh\" (UniqueName: \"kubernetes.io/projected/5c3da943-badb-4d63-a3e2-541a9ecf9347-kube-api-access-hnvdh\") pod \"olm-operator-6b444d44fb-fxsf8\" (UID: \"5c3da943-badb-4d63-a3e2-541a9ecf9347\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fxsf8" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.864420 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" event={"ID":"b1c48715-1326-41fd-8b2e-007578010c80","Type":"ContainerStarted","Data":"c1ba09e1cfac89999cbbf7d1fe8539bf06d3bcbaa3e369ff8bd5332863f35f43"} Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.864546 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" event={"ID":"b1c48715-1326-41fd-8b2e-007578010c80","Type":"ContainerStarted","Data":"fa34fb27a3599155d7cd9bd4be4ddcfea1efd1f867d401958039d611a8125338"} Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.864600 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.867953 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n68nh\" (UniqueName: \"kubernetes.io/projected/80bd8e1f-fde7-45e5-a482-8dd2d336bb3e-kube-api-access-n68nh\") pod \"dns-default-vjddm\" (UID: \"80bd8e1f-fde7-45e5-a482-8dd2d336bb3e\") " pod="openshift-dns/dns-default-vjddm" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.868483 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnpfh\" (UniqueName: \"kubernetes.io/projected/9dad1493-ca2b-4e2d-8c5f-8374b30deefa-kube-api-access-hnpfh\") pod \"machine-config-controller-84d6567774-x9rhw\" (UID: \"9dad1493-ca2b-4e2d-8c5f-8374b30deefa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x9rhw" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.880022 4680 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-qxkds container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.880082 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" podUID="b1c48715-1326-41fd-8b2e-007578010c80" containerName="oauth-openshift" probeResult="failure" output="Get 
\"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.889993 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:29 crc kubenswrapper[4680]: E0217 17:46:29.890309 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:30.390293411 +0000 UTC m=+119.941192090 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.891614 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbrt9" event={"ID":"ae5ad2a7-30fa-4e6c-9cb7-273f3d260318","Type":"ContainerStarted","Data":"91cff2f9aa4f694e693ea37504164c3884252fb93f8c2463e2c04a23b3bfa46c"} Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.891644 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbrt9" event={"ID":"ae5ad2a7-30fa-4e6c-9cb7-273f3d260318","Type":"ContainerStarted","Data":"2e9b9f3453a21d510b06916e9c55889763c30633bf4833d2aade5ba1a44ade99"} Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.945560 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kb4mk"] Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.972394 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bzv2j" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.978801 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rjbs8" Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.990892 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:29 crc kubenswrapper[4680]: E0217 17:46:29.991006 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:30.490988749 +0000 UTC m=+120.041887418 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.991264 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:29 crc kubenswrapper[4680]: E0217 17:46:29.995193 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:30.495173629 +0000 UTC m=+120.046072378 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:29 crc kubenswrapper[4680]: I0217 17:46:29.995576 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-rptfq" Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.020189 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x9rhw" Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.026373 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-gd8pp" Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.054023 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fxsf8" Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.076173 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-vjddm" Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.093624 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:30 crc kubenswrapper[4680]: E0217 17:46:30.094057 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:30.59403597 +0000 UTC m=+120.144934649 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.162600 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kpzts"] Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.196126 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:30 crc kubenswrapper[4680]: E0217 17:46:30.196645 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:30.696631367 +0000 UTC m=+120.247530046 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.281928 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" podStartSLOduration=99.281908906 podStartE2EDuration="1m39.281908906s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:30.26144485 +0000 UTC m=+119.812343529" watchObservedRunningTime="2026-02-17 17:46:30.281908906 +0000 UTC m=+119.832807575" Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.300460 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:30 crc kubenswrapper[4680]: E0217 17:46:30.300735 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:30.80070329 +0000 UTC m=+120.351601969 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.300865 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:30 crc kubenswrapper[4680]: E0217 17:46:30.301235 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:30.801219156 +0000 UTC m=+120.352117885 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.356529 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-t9gdn"] Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.406568 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:30 crc kubenswrapper[4680]: E0217 17:46:30.406908 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:30.906891758 +0000 UTC m=+120.457790437 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.408104 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-xmlr4" podStartSLOduration=99.408092996 podStartE2EDuration="1m39.408092996s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:30.407008042 +0000 UTC m=+119.957906721" watchObservedRunningTime="2026-02-17 17:46:30.408092996 +0000 UTC m=+119.958991675" Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.440653 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k7lq2"] Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.516773 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:30 crc kubenswrapper[4680]: E0217 17:46:30.517437 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:31.017422592 +0000 UTC m=+120.568321271 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.551375 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xpf6f"] Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.607221 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lsp6v"] Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.607274 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522505-8tbt2"] Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.618117 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:30 crc kubenswrapper[4680]: E0217 17:46:30.618509 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:31.118490541 +0000 UTC m=+120.669389230 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.670675 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4bdxb"] Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.724608 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:30 crc kubenswrapper[4680]: E0217 17:46:30.725149 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:31.225117483 +0000 UTC m=+120.776016162 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.728165 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s9v7z"] Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.754192 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hqvn6"] Feb 17 17:46:30 crc kubenswrapper[4680]: W0217 17:46:30.763661 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b6c5aae_bd5b_47b7_ada0_ff20fb2b9e9e.slice/crio-49743fafa451637cca36b94159f8c0af45590b0967515afced64e89915906bc5 WatchSource:0}: Error finding container 49743fafa451637cca36b94159f8c0af45590b0967515afced64e89915906bc5: Status 404 returned error can't find the container with id 49743fafa451637cca36b94159f8c0af45590b0967515afced64e89915906bc5 Feb 17 17:46:30 crc kubenswrapper[4680]: W0217 17:46:30.789395 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod210b630c_d271_49ec_9381_795e5e92cf4e.slice/crio-e78be20e185f283455a45c2261e2335317e38dd783b648ad129fe3db9da8dc0b WatchSource:0}: Error finding container e78be20e185f283455a45c2261e2335317e38dd783b648ad129fe3db9da8dc0b: Status 404 returned error can't find the container with id e78be20e185f283455a45c2261e2335317e38dd783b648ad129fe3db9da8dc0b Feb 17 17:46:30 crc kubenswrapper[4680]: W0217 17:46:30.790907 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b6a027a_aad9_445b_8bab_ac0004a7c565.slice/crio-adf5d571e0c6669a95cabada96254a78386b49409b5b79e5b28c183888d3fbc6 WatchSource:0}: Error finding container adf5d571e0c6669a95cabada96254a78386b49409b5b79e5b28c183888d3fbc6: Status 404 returned error can't find the container with id adf5d571e0c6669a95cabada96254a78386b49409b5b79e5b28c183888d3fbc6 Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.826871 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:30 crc kubenswrapper[4680]: E0217 17:46:30.826952 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:31.326940186 +0000 UTC m=+120.877838865 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.827185 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:30 crc kubenswrapper[4680]: E0217 17:46:30.827458 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:31.327450202 +0000 UTC m=+120.878348881 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.896166 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-5zzk5" podStartSLOduration=99.896145066 podStartE2EDuration="1m39.896145066s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:30.882881914 +0000 UTC m=+120.433780603" watchObservedRunningTime="2026-02-17 17:46:30.896145066 +0000 UTC m=+120.447043755" Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.906251 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lsp6v" event={"ID":"210b630c-d271-49ec-9381-795e5e92cf4e","Type":"ContainerStarted","Data":"e78be20e185f283455a45c2261e2335317e38dd783b648ad129fe3db9da8dc0b"} Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.910413 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ffq2r" event={"ID":"2612e265-77ef-4e49-ba43-dddb142b0856","Type":"ContainerStarted","Data":"a1eab4d6e941358c92f645b30ebb5ab39cce37ecad82be8d32cfb4f7c24648fd"} Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.918408 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-xpf6f" event={"ID":"9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e","Type":"ContainerStarted","Data":"49743fafa451637cca36b94159f8c0af45590b0967515afced64e89915906bc5"} Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.920998 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522505-8tbt2" 
event={"ID":"4b6a027a-aad9-445b-8bab-ac0004a7c565","Type":"ContainerStarted","Data":"adf5d571e0c6669a95cabada96254a78386b49409b5b79e5b28c183888d3fbc6"} Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.921968 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k7lq2" event={"ID":"06368790-6302-406a-9f22-a1c07b6f91a2","Type":"ContainerStarted","Data":"e7ccda6f97367d9588a7ec33929928196d5e0bd8326dce37dfab46c093cd3411"} Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.925299 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-t9gdn" event={"ID":"ed1c57d6-3991-4ebb-9149-63e38e19dc40","Type":"ContainerStarted","Data":"1e73b22bbdc2da4dee523d1a74095171d078fc7e3bb3ae122f38bfd1a4e4ab90"} Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.928505 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:30 crc kubenswrapper[4680]: E0217 17:46:30.928804 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:31.42877522 +0000 UTC m=+120.979673909 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.928973 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:30 crc kubenswrapper[4680]: E0217 17:46:30.929467 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:31.42945102 +0000 UTC m=+120.980349779 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.936767 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-nkxs4" event={"ID":"aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab","Type":"ContainerStarted","Data":"a109e72614eee0af4c18df2cc63f56f776c8556409997ff506cf5f6e70a97a37"} Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.938331 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kb4mk" event={"ID":"14cc3153-fc79-4a60-b371-eae93f02ba78","Type":"ContainerStarted","Data":"f99bc5bbb47c83c95112ef888c2608078335cb30dc07e79565f7b00330018946"} Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.941391 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-m96mg" event={"ID":"0da17dc9-f5b1-4877-8242-28d767846f40","Type":"ContainerStarted","Data":"b40e1d3704401512536ea832f36abb8044b662f855b81b89821ea35382645374"} Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.942760 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vhvpq" event={"ID":"3abf57a9-2bd6-4f7f-ba75-5fe928201df8","Type":"ContainerStarted","Data":"e636eb0bf8ce8f8ea6ba12f9dfcdf653042e7bc16dbfa4bbb725855ce76e49da"} Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.943596 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kpzts" event={"ID":"f04bff26-dca4-4b98-82e8-582cf4305ee3","Type":"ContainerStarted","Data":"940adf55bf392a562027cc022f2f7284f2917e06446b0ab9ec9f16b90f969410"} Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.944714 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5c84c" event={"ID":"da6d5deb-7032-441b-b78a-efbecc6c25fd","Type":"ContainerStarted","Data":"cd8d8c9a8240a3dafd34f3a6667140176bfcb76397d8dc70c3fac6b555e96328"} Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.945527 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdrlm" event={"ID":"4a9f3446-4189-40c0-a189-1ad04182560f","Type":"ContainerStarted","Data":"60d52b43c7498f9ac7f4d2784f8f6bc77630bef4d9c8af38b05b872b8806b394"} Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.964826 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-pz228" event={"ID":"3beac5a6-c506-4360-bab3-277801f0fb05","Type":"ContainerStarted","Data":"69f6a944c19bb7b7c2ff80c30692523421c015ffc5431162d069ef45eacd9db4"} Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.977759 4680 generic.go:334] "Generic (PLEG): container finished" podID="387f58c3-60a3-4f10-a29c-3bd2afca3e5d" containerID="8fead53351abfcd3e97c20796d1d1112cd0600bc5a345573e724c75f8625a6ed" exitCode=0 Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.977825 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" event={"ID":"387f58c3-60a3-4f10-a29c-3bd2afca3e5d","Type":"ContainerDied","Data":"8fead53351abfcd3e97c20796d1d1112cd0600bc5a345573e724c75f8625a6ed"} Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.991341 4680 generic.go:334] "Generic (PLEG): container finished" podID="13cf25e6-413f-472e-8178-439df59c10d7" containerID="4cee3662ea95f975143e731aef043d1f37dee0a141b148a55ef9cd30d02e04d2" exitCode=0 Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.992474 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" event={"ID":"13cf25e6-413f-472e-8178-439df59c10d7","Type":"ContainerDied","Data":"4cee3662ea95f975143e731aef043d1f37dee0a141b148a55ef9cd30d02e04d2"} Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.997932 4680 patch_prober.go:28] interesting pod/downloads-7954f5f757-z9fcm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.997981 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z9fcm" podUID="8e87e04c-2d08-4f13-8b5a-db27f849132e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.998395 4680 patch_prober.go:28] interesting pod/console-operator-58897d9998-2l4fg container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 17 17:46:30 crc kubenswrapper[4680]: I0217 17:46:30.998425 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-2l4fg" podUID="8c547720-450f-4714-827e-7694bd2b89f7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 17 17:46:31 crc kubenswrapper[4680]: I0217 17:46:31.003626 4680 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-5zzk5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 17 17:46:31 crc kubenswrapper[4680]: I0217 17:46:31.003682 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-5zzk5" podUID="8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 17 17:46:31 crc kubenswrapper[4680]: I0217 17:46:31.029785 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:31 crc kubenswrapper[4680]: E0217 17:46:31.035049 4680 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:31.535027611 +0000 UTC m=+121.085926330 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:31 crc kubenswrapper[4680]: I0217 17:46:31.095755 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-2l4fg" podStartSLOduration=100.095736176 podStartE2EDuration="1m40.095736176s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:31.060674086 +0000 UTC m=+120.611572785" watchObservedRunningTime="2026-02-17 17:46:31.095736176 +0000 UTC m=+120.646634855" Feb 17 17:46:31 crc kubenswrapper[4680]: I0217 17:46:31.132109 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:31 crc kubenswrapper[4680]: E0217 17:46:31.132530 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:31.632517509 +0000 UTC m=+121.183416198 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:31 crc kubenswrapper[4680]: I0217 17:46:31.196450 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jbrt9" podStartSLOduration=100.196429344 podStartE2EDuration="1m40.196429344s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:31.159836937 +0000 UTC m=+120.710735616" watchObservedRunningTime="2026-02-17 17:46:31.196429344 +0000 UTC m=+120.747328023" Feb 17 17:46:31 crc kubenswrapper[4680]: I0217 17:46:31.235217 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:31 crc kubenswrapper[4680]: E0217 17:46:31.235856 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:31.735835898 +0000 UTC m=+121.286734587 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:31 crc kubenswrapper[4680]: I0217 17:46:31.339084 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:31 crc kubenswrapper[4680]: E0217 17:46:31.340055 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:31.840042285 +0000 UTC m=+121.390940964 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:31 crc kubenswrapper[4680]: I0217 17:46:31.349448 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-nkxs4" Feb 17 17:46:31 crc kubenswrapper[4680]: I0217 17:46:31.391953 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-z9fcm" podStartSLOduration=100.391939457 podStartE2EDuration="1m40.391939457s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:31.363532774 +0000 UTC m=+120.914431463" watchObservedRunningTime="2026-02-17 17:46:31.391939457 +0000 UTC m=+120.942838136" Feb 17 17:46:31 crc kubenswrapper[4680]: E0217 17:46:31.441328 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:31.941296641 +0000 UTC m=+121.492195320 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:31 crc kubenswrapper[4680]: I0217 17:46:31.441360 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:31 crc kubenswrapper[4680]: I0217 17:46:31.441532 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:31 crc kubenswrapper[4680]: E0217 17:46:31.441888 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:31.941879968 +0000 UTC m=+121.492778647 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:31 crc kubenswrapper[4680]: I0217 17:46:31.542577 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:31 crc kubenswrapper[4680]: E0217 17:46:31.542740 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:32.042714261 +0000 UTC m=+121.593612950 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:31 crc kubenswrapper[4680]: I0217 17:46:31.543182 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:31 crc kubenswrapper[4680]: E0217 17:46:31.543603 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:32.043591668 +0000 UTC m=+121.594490347 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:31 crc kubenswrapper[4680]: I0217 17:46:31.607132 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ffq2r" podStartSLOduration=101.607114691 podStartE2EDuration="1m41.607114691s" podCreationTimestamp="2026-02-17 17:44:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:31.599775033 +0000 UTC m=+121.150673712" watchObservedRunningTime="2026-02-17 17:46:31.607114691 +0000 UTC m=+121.158013370" Feb 17 17:46:31 crc kubenswrapper[4680]: I0217 17:46:31.607300 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-nkxs4" podStartSLOduration=100.607295717 podStartE2EDuration="1m40.607295717s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:31.560570446 +0000 UTC m=+121.111469125" watchObservedRunningTime="2026-02-17 17:46:31.607295717 +0000 UTC m=+121.158194396" Feb 17 17:46:31 crc kubenswrapper[4680]: I0217 17:46:31.645623 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:31 crc kubenswrapper[4680]: E0217 17:46:31.646037 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:32.146017449 +0000 UTC m=+121.696916128 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:31 crc kubenswrapper[4680]: I0217 17:46:31.748397 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:31 crc kubenswrapper[4680]: E0217 17:46:31.748692 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:32.248680449 +0000 UTC m=+121.799579128 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:31 crc kubenswrapper[4680]: I0217 17:46:31.766046 4680 patch_prober.go:28] interesting pod/router-default-5444994796-nkxs4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 17:46:31 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Feb 17 17:46:31 crc kubenswrapper[4680]: [+]process-running ok Feb 17 17:46:31 crc kubenswrapper[4680]: healthz check failed Feb 17 17:46:31 crc kubenswrapper[4680]: I0217 17:46:31.766091 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nkxs4" podUID="aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 17:46:31 crc kubenswrapper[4680]: I0217 17:46:31.854299 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:31 crc kubenswrapper[4680]: E0217 17:46:31.865385 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:32.365352513 +0000 UTC m=+121.916251202 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:31 crc kubenswrapper[4680]: I0217 17:46:31.961497 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:31 crc kubenswrapper[4680]: E0217 17:46:31.961838 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:32.46182643 +0000 UTC m=+122.012725099 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:31 crc kubenswrapper[4680]: I0217 17:46:31.997875 4680 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-qxkds container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 17:46:31 crc kubenswrapper[4680]: I0217 17:46:31.997933 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" podUID="b1c48715-1326-41fd-8b2e-007578010c80" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.033807 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4bdxb" event={"ID":"0b37c5d2-3899-45db-b4a3-976af83ba107","Type":"ContainerStarted","Data":"24edacb650b049e07d1dc3c5e833d9fd7d0721cd1509b18347f7152ace01be21"} Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.062903 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:32 crc kubenswrapper[4680]: E0217 17:46:32.063490 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:32.563471037 +0000 UTC m=+122.114369716 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.065169 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k7lq2" event={"ID":"06368790-6302-406a-9f22-a1c07b6f91a2","Type":"ContainerStarted","Data":"b63cab2ab751c4e155b5e7a033a58350e7f94f0986c9ff17896327f47504e92b"} Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.072273 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kb4mk" event={"ID":"14cc3153-fc79-4a60-b371-eae93f02ba78","Type":"ContainerStarted","Data":"649b37816d626159c03aae5f0e7576238dda5dc1000f794ab365b5df323cc628"} Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.073289 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kb4mk" Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.081350 4680 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-kb4mk container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.081424 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kb4mk" podUID="14cc3153-fc79-4a60-b371-eae93f02ba78" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.094283 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vhvpq" event={"ID":"3abf57a9-2bd6-4f7f-ba75-5fe928201df8","Type":"ContainerStarted","Data":"1b8930107bf4f83764cc7ceb4dfa0eaf2cee6963e0c3e9fedf2007ebae1cc781"} Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.119749 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5c84c" event={"ID":"da6d5deb-7032-441b-b78a-efbecc6c25fd","Type":"ContainerStarted","Data":"d276a0c6ceb44568f8f853df1d0bad501202a1c9ce5417ddcda8b4ae9a9ea4cb"} Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.139154 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s9v7z" event={"ID":"af8115d2-8e03-47f7-852c-4c7f08e1d089","Type":"ContainerStarted","Data":"8b33c69dacdc7c7760ca98265be2bdcb2e4d267c8c4fc74d62fea4122b97f917"} Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.149120 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-server-pz228" event={"ID":"3beac5a6-c506-4360-bab3-277801f0fb05","Type":"ContainerStarted","Data":"43421f2c23cf36483befd468ded994369b979f7f18bdcefa904973337a117289"} Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.161196 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k7lq2" podStartSLOduration=101.161182182 podStartE2EDuration="1m41.161182182s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:32.102457548 +0000 UTC m=+121.653356227" watchObservedRunningTime="2026-02-17 17:46:32.161182182 +0000 UTC m=+121.712080861" Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.161920 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kb4mk" podStartSLOduration=101.161914315 podStartE2EDuration="1m41.161914315s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:32.161852513 +0000 UTC m=+121.712751192" watchObservedRunningTime="2026-02-17 17:46:32.161914315 +0000 UTC m=+121.712812984" Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.166070 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:32 crc kubenswrapper[4680]: E0217 17:46:32.166635 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:32.666620321 +0000 UTC m=+122.217519000 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.172084 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hqvn6" event={"ID":"bf8d275e-2c24-433b-84a2-0e415ab00c62","Type":"ContainerStarted","Data":"ae76790c645489fc3d585cab2d48118d042a394e4aecd4fa97829b8d71c30142"} Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.196331 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-vhvpq" podStartSLOduration=101.196288633 podStartE2EDuration="1m41.196288633s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:32.186010114 +0000 UTC m=+121.736908793" watchObservedRunningTime="2026-02-17 17:46:32.196288633 +0000 UTC m=+121.747187312" Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.216657 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-m96mg" event={"ID":"0da17dc9-f5b1-4877-8242-28d767846f40","Type":"ContainerStarted","Data":"6b12a7ca8eec4f91b6f4b4fbcf87ba8ab3367d9b3af9263ff4383fbf878aaa5b"} Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.241922 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-pz228" podStartSLOduration=6.24188105 podStartE2EDuration="6.24188105s" podCreationTimestamp="2026-02-17 17:46:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:32.228941277 +0000 UTC m=+121.779839956" watchObservedRunningTime="2026-02-17 17:46:32.24188105 +0000 UTC m=+121.792779729" Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.242638 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdrlm" event={"ID":"4a9f3446-4189-40c0-a189-1ad04182560f","Type":"ContainerStarted","Data":"ae6131ba89423e56bcd5dbfc99bcf33c5f7e8fdb74cff33f72c6908a29d150e7"} Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.267037 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:32 crc kubenswrapper[4680]: E0217 17:46:32.269643 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:32.769625072 +0000 UTC m=+122.320523741 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.271752 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2h2c" event={"ID":"ad93e803-b1ee-4401-a6c4-0f510b6b8735","Type":"ContainerStarted","Data":"914eeb9148ae8264de86cf9bac7bb89932bca5766f9bbbb9fd48479647127fab"} Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.277100 4680 patch_prober.go:28] interesting pod/downloads-7954f5f757-z9fcm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.277156 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z9fcm" podUID="8e87e04c-2d08-4f13-8b5a-db27f849132e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.277224 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2h2c" Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.281105 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-m96mg" podStartSLOduration=101.281083058 podStartE2EDuration="1m41.281083058s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:32.268643131 +0000 UTC m=+121.819541810" watchObservedRunningTime="2026-02-17 17:46:32.281083058 +0000 UTC m=+121.831981737" Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.304097 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2h2c" podStartSLOduration=101.304082401 podStartE2EDuration="1m41.304082401s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:32.303820883 +0000 UTC m=+121.854719562" watchObservedRunningTime="2026-02-17 17:46:32.304082401 +0000 UTC m=+121.854981080" Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.316884 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-2l4fg" Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.349744 4680 patch_prober.go:28] interesting pod/router-default-5444994796-nkxs4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 17:46:32 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Feb 17 17:46:32 crc kubenswrapper[4680]: [+]process-running ok Feb 
17 17:46:32 crc kubenswrapper[4680]: healthz check failed Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.349801 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nkxs4" podUID="aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.368644 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.369196 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fxsf8"] Feb 17 17:46:32 crc kubenswrapper[4680]: E0217 17:46:32.370339 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:32.870307889 +0000 UTC m=+122.421206638 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.472793 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:32 crc kubenswrapper[4680]: W0217 17:46:32.473521 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c3da943_badb_4d63_a3e2_541a9ecf9347.slice/crio-0b4072e1f0006f7f3a90c54cc85ed84ba9cd0dc88d717e267e9374a2e96f3d8d WatchSource:0}: Error finding container 0b4072e1f0006f7f3a90c54cc85ed84ba9cd0dc88d717e267e9374a2e96f3d8d: Status 404 returned error can't find the container with id 0b4072e1f0006f7f3a90c54cc85ed84ba9cd0dc88d717e267e9374a2e96f3d8d Feb 17 17:46:32 crc kubenswrapper[4680]: E0217 17:46:32.475944 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:32.97591772 +0000 UTC m=+122.526816399 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.512743 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fmz28"] Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.548502 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rjbs8"] Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.577099 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:32 crc kubenswrapper[4680]: E0217 17:46:32.577469 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:33.077453974 +0000 UTC m=+122.628352653 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.601178 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-74j8f"] Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.615730 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-prsk2"] Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.660820 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c462h"] Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.686311 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:32 crc kubenswrapper[4680]: E0217 17:46:32.686427 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:33.186404228 +0000 UTC m=+122.737302907 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.686572 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:32 crc kubenswrapper[4680]: E0217 17:46:32.686836 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:33.186825192 +0000 UTC m=+122.737723871 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.689125 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-rptfq"] Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.710125 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gt6kd"] Feb 17 17:46:32 crc kubenswrapper[4680]: W0217 17:46:32.753221 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2953a7b7_7e10_4037_94d5_08322d5b7521.slice/crio-8872ab94ccd622bd1e0cfe5ea10fbe3ef7803d8b710e00c9c1edab03cbd5876e WatchSource:0}: Error finding container 8872ab94ccd622bd1e0cfe5ea10fbe3ef7803d8b710e00c9c1edab03cbd5876e: Status 404 returned error can't find the container with id 8872ab94ccd622bd1e0cfe5ea10fbe3ef7803d8b710e00c9c1edab03cbd5876e Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.768656 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-m5zvm"] Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.780138 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-vjddm"] Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.783474 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-gd8pp"] Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.787580 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 
17:46:32 crc kubenswrapper[4680]: E0217 17:46:32.787879 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:33.28786504 +0000 UTC m=+122.838763719 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.788373 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rbdrt"] Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.829199 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khf8m"] Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.832868 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-x9rhw"] Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.892332 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:32 crc kubenswrapper[4680]: E0217 17:46:32.892961 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:33.392949614 +0000 UTC m=+122.943848293 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.906204 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bzv2j"] Feb 17 17:46:32 crc kubenswrapper[4680]: W0217 17:46:32.948621 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1816a7c_9c0b_49db_b812_dd9b2c605760.slice/crio-8b0acee9714ceca037074746d4686a7627dd3ab158a1df3df88b2be19710fe89 WatchSource:0}: Error finding container 8b0acee9714ceca037074746d4686a7627dd3ab158a1df3df88b2be19710fe89: Status 404 returned error can't find the container with id 8b0acee9714ceca037074746d4686a7627dd3ab158a1df3df88b2be19710fe89 Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.993831 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:32 crc kubenswrapper[4680]: E0217 17:46:32.994067 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:33.494020434 +0000 UTC m=+123.044919113 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:32 crc kubenswrapper[4680]: I0217 17:46:32.994175 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:32 crc kubenswrapper[4680]: E0217 17:46:32.994585 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:33.494567781 +0000 UTC m=+123.045466460 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.095132 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:33 crc kubenswrapper[4680]: E0217 17:46:33.095439 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:33.595413724 +0000 UTC m=+123.146312403 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.095521 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:33 crc kubenswrapper[4680]: E0217 17:46:33.095974 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:33.59595575 +0000 UTC m=+123.146854429 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.196883 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:33 crc kubenswrapper[4680]: E0217 17:46:33.197308 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:33.697289658 +0000 UTC m=+123.248188337 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.298063 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:33 crc kubenswrapper[4680]: E0217 17:46:33.298562 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:33.798545303 +0000 UTC m=+123.349443982 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.311443 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-rptfq" event={"ID":"69d9c3bb-1b14-45a5-8aa1-29c8b79fd809","Type":"ContainerStarted","Data":"7fcc6f45c1d96f3672847b98f06869ef8f2f151f3e625c06cae0c7c65b98e6ce"} Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.322740 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" event={"ID":"13cf25e6-413f-472e-8178-439df59c10d7","Type":"ContainerStarted","Data":"037d4fe18a08aae59af2c3e6a78d3f0f5b837bb17de5912a64420dad025f269f"} Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.322789 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" event={"ID":"13cf25e6-413f-472e-8178-439df59c10d7","Type":"ContainerStarted","Data":"9dae81a8928760a6934e2401106bb6b09bee815f8f54e28cc2ebacc2c2226abc"} Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.328738 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522505-8tbt2" event={"ID":"4b6a027a-aad9-445b-8bab-ac0004a7c565","Type":"ContainerStarted","Data":"e7dfc9b6c735a791e5e2cc87e2682d01f1809c7bdfbf11c2c9ddfb7fa72e8119"} Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.347289 4680 patch_prober.go:28] interesting pod/router-default-5444994796-nkxs4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 17:46:33 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Feb 17 17:46:33 crc kubenswrapper[4680]: [+]process-running ok Feb 17 17:46:33 crc kubenswrapper[4680]: healthz check failed Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.347341 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nkxs4" podUID="aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.373867 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fmz28" event={"ID":"4bd944e7-4291-4cac-925b-2115e5bdb0ef","Type":"ContainerStarted","Data":"d0c0fd4ab3a6d9b769f4fbb9577fe52a49b18755a31ba143c85bf9854a32ee58"} Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.373910 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fmz28" event={"ID":"4bd944e7-4291-4cac-925b-2115e5bdb0ef","Type":"ContainerStarted","Data":"09bb2e8b72cb499adb01f93973e4d1580a9765ef511ced06a197c1cf837b7d94"} Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.399806 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rjbs8" 
event={"ID":"17edf562-a2eb-4263-bbce-c65013fd645f","Type":"ContainerStarted","Data":"2c5b8edbc9581855f5f2f47cd62168294d0896b869a323bbf844265467f2b9ee"} Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.399851 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rjbs8" event={"ID":"17edf562-a2eb-4263-bbce-c65013fd645f","Type":"ContainerStarted","Data":"6a5f4b31efe4ec4712d358fdf1423a8e9782edb6ce9111e8085841f5ece4d64b"} Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.400809 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-rjbs8" Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.406995 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:33 crc kubenswrapper[4680]: E0217 17:46:33.407920 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:33.907906791 +0000 UTC m=+123.458805470 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.421350 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-m5zvm" event={"ID":"feae09d1-ce01-4b1a-a198-f267e94d9190","Type":"ContainerStarted","Data":"de24cacad37cd91054c1fcb0c7095d8bce40ba3c83eb7d53958134cc7289c2b8"} Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.437660 4680 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rjbs8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.437718 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rjbs8" podUID="17edf562-a2eb-4263-bbce-c65013fd645f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.480567 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hqvn6" event={"ID":"bf8d275e-2c24-433b-84a2-0e415ab00c62","Type":"ContainerStarted","Data":"c057e8631a28d04c3c0a3ef03344290150ef59812cf609536054c764db27fbe3"} Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.499671 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-xpf6f" 
event={"ID":"9b6c5aae-bd5b-47b7-ada0-ff20fb2b9e9e","Type":"ContainerStarted","Data":"e23bb45cb61e31b01f97637c25ed7eb0ca6d6c8b1d1e30d39707161a2f3b5e0c"} Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.508299 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:33 crc kubenswrapper[4680]: E0217 17:46:33.509535 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:34.009519127 +0000 UTC m=+123.560417806 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.520441 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s9v7z" event={"ID":"af8115d2-8e03-47f7-852c-4c7f08e1d089","Type":"ContainerStarted","Data":"c5b368dab6f8430da1ba18e3d91bba553ddac44e2313976bf9c688d20d0ae998"} Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.532046 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdrlm" event={"ID":"4a9f3446-4189-40c0-a189-1ad04182560f","Type":"ContainerStarted","Data":"412fff7f54aec756408e32f6c62a14f05150282d549aa0de75beb0bc493f94ab"} Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.533959 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fxsf8" event={"ID":"5c3da943-badb-4d63-a3e2-541a9ecf9347","Type":"ContainerStarted","Data":"015ce7ef8c584dcc2d547058f98450a03a45c6b460c75a6812b7cb4c6b6cc462"} Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.533992 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fxsf8" event={"ID":"5c3da943-badb-4d63-a3e2-541a9ecf9347","Type":"ContainerStarted","Data":"0b4072e1f0006f7f3a90c54cc85ed84ba9cd0dc88d717e267e9374a2e96f3d8d"} Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.534729 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fxsf8" Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.541618 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5c84c" event={"ID":"da6d5deb-7032-441b-b78a-efbecc6c25fd","Type":"ContainerStarted","Data":"144a9590203df9cee3a344401689bbbe00bb583b1863c222360486085f57156b"} Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.541981 4680 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-fxsf8 container/olm-operator 
namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.542033 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fxsf8" podUID="5c3da943-badb-4d63-a3e2-541a9ecf9347" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.549109 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" podStartSLOduration=102.549093847 podStartE2EDuration="1m42.549093847s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:33.511449807 +0000 UTC m=+123.062348496" watchObservedRunningTime="2026-02-17 17:46:33.549093847 +0000 UTC m=+123.099992526" Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.553403 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gt6kd" event={"ID":"05be763f-68c8-46fe-a8c5-f30ee5afc5db","Type":"ContainerStarted","Data":"969f878edfe3a5f6f90a4085d10b03a739727d572a217ff085239d284559a4b7"} Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.554553 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4bdxb" event={"ID":"0b37c5d2-3899-45db-b4a3-976af83ba107","Type":"ContainerStarted","Data":"b13f10b58ba763794b661e164a1bfadcad8b36f45c616bfc19cd3d2add675a46"} Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.557998 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4bdxb" Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.566471 4680 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-4bdxb container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused" start-of-body= Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.566532 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4bdxb" podUID="0b37c5d2-3899-45db-b4a3-976af83ba107" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused" Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.586163 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c462h" event={"ID":"67e5bb04-4920-4f51-a729-0af77b16490e","Type":"ContainerStarted","Data":"01ce572f1d8002f9ecd5de72513ca99cc2ed681ae3df0798092e09b73f3b8d19"} Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.615182 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.616411 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fmz28" podStartSLOduration=102.616393277 podStartE2EDuration="1m42.616393277s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:33.614852079 +0000 UTC m=+123.165750758" watchObservedRunningTime="2026-02-17 17:46:33.616393277 +0000 UTC m=+123.167291956" Feb 17 17:46:33 crc kubenswrapper[4680]: E0217 17:46:33.616730 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:34.116707276 +0000 UTC m=+123.667605995 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.617032 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hqvn6" podStartSLOduration=102.617024706 podStartE2EDuration="1m42.617024706s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:33.56692754 +0000 UTC m=+123.117826219" watchObservedRunningTime="2026-02-17 17:46:33.617024706 +0000 UTC m=+123.167923385" Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.673488 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.673539 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-vjddm" event={"ID":"80bd8e1f-fde7-45e5-a482-8dd2d336bb3e","Type":"ContainerStarted","Data":"7b1b758bd56a78cca459831e18edbaf402da8a494c042db9add68a08fea509c5"} Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.673667 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.676494 4680 patch_prober.go:28] interesting pod/apiserver-76f77b778f-dp7b9 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.676536 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" podUID="13cf25e6-413f-472e-8178-439df59c10d7" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 17 17:46:33 crc 
kubenswrapper[4680]: I0217 17:46:33.680612 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-gd8pp" event={"ID":"789c22f5-7717-4e0f-9df4-6bc17ed6a932","Type":"ContainerStarted","Data":"6067cc45c20c23aa2ec71bfba382593fac2b242e0c233e85da8b6e2ad00cd289"} Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.682068 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rbdrt" event={"ID":"cbfccbea-2fcb-4c86-b24e-963961a2cd3a","Type":"ContainerStarted","Data":"bbb5ed6b0d027c62d015d24e308fca3610f40e989fd77c7dcc7db60ddcfa8870"} Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.683607 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lsp6v" event={"ID":"210b630c-d271-49ec-9381-795e5e92cf4e","Type":"ContainerStarted","Data":"ed6a4664410b076b040b42266f5327062f43c5d7f2381fb19e7ad63c530634fc"} Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.683653 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lsp6v" event={"ID":"210b630c-d271-49ec-9381-795e5e92cf4e","Type":"ContainerStarted","Data":"dccf6825bf2ecae2664691489a94ead5b56e5f9f46db47371a7c22277cb9e25b"} Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.684194 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lsp6v" Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.684998 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khf8m" event={"ID":"fa215132-e3f4-460b-85d2-19d7dff33c1a","Type":"ContainerStarted","Data":"ed938c8a9605b0ac87c270a67eb6f35973aeb9295769c515f829f1219fced608"} Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.700046 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x9rhw" event={"ID":"9dad1493-ca2b-4e2d-8c5f-8374b30deefa","Type":"ContainerStarted","Data":"13e640647155b05de734518ac044b7504a96d77c47ff345994fba7fa719c7f45"} Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.703392 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-xpf6f" podStartSLOduration=102.70337897 podStartE2EDuration="1m42.70337897s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:33.697711883 +0000 UTC m=+123.248610562" watchObservedRunningTime="2026-02-17 17:46:33.70337897 +0000 UTC m=+123.254277669" Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.720265 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:33 crc kubenswrapper[4680]: E0217 17:46:33.720972 4680 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:34.220953245 +0000 UTC m=+123.771851974 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.722484 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" event={"ID":"387f58c3-60a3-4f10-a29c-3bd2afca3e5d","Type":"ContainerStarted","Data":"3a839c81ccaac22a3362571f143e8023f26c5c5c5cef8fe3bd583a66202302bd"} Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.732895 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522505-8tbt2" podStartSLOduration=93.732877715 podStartE2EDuration="1m33.732877715s" podCreationTimestamp="2026-02-17 17:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:33.730570854 +0000 UTC m=+123.281469523" watchObservedRunningTime="2026-02-17 17:46:33.732877715 +0000 UTC m=+123.283776394" Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.741777 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bzv2j" event={"ID":"a1816a7c-9c0b-49db-b812-dd9b2c605760","Type":"ContainerStarted","Data":"8b0acee9714ceca037074746d4686a7627dd3ab158a1df3df88b2be19710fe89"} Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.756862 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.757096 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.774626 4680 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-w88hs container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.774690 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" podUID="387f58c3-60a3-4f10-a29c-3bd2afca3e5d" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.775471 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-prsk2" event={"ID":"fb8e2703-d2bb-403c-9912-e7d9a1e2475d","Type":"ContainerStarted","Data":"cfa162a12b91ad5a9d039db0401911fa7b62e062249c8bf5c2e02ae7ce980dc7"} Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.776908 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-prsk2" Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.776959 4680 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-prsk2 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.777008 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-prsk2" podUID="fb8e2703-d2bb-403c-9912-e7d9a1e2475d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.796610 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-74j8f" event={"ID":"2953a7b7-7e10-4037-94d5-08322d5b7521","Type":"ContainerStarted","Data":"8872ab94ccd622bd1e0cfe5ea10fbe3ef7803d8b710e00c9c1edab03cbd5876e"} Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.813136 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-rjbs8" podStartSLOduration=102.813111307 podStartE2EDuration="1m42.813111307s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:33.796592774 +0000 UTC m=+123.347491453" watchObservedRunningTime="2026-02-17 17:46:33.813111307 +0000 UTC m=+123.364009996" Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.833715 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:33 crc kubenswrapper[4680]: E0217 17:46:33.835027 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:34.335008108 +0000 UTC m=+123.885906787 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.853092 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-t9gdn" event={"ID":"ed1c57d6-3991-4ebb-9149-63e38e19dc40","Type":"ContainerStarted","Data":"4cc154261124020a4b7be820cd9620dbc115d3fcd61329eaab6df60d6da8920e"} Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.881066 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kpzts" event={"ID":"f04bff26-dca4-4b98-82e8-582cf4305ee3","Type":"ContainerStarted","Data":"724881e09f3b63da96e7d185594d839d762f71f25b8b148798c2c1f418dcfc1a"} Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.935530 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:33 crc kubenswrapper[4680]: E0217 17:46:33.935852 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:34.43584171 +0000 UTC m=+123.986740389 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:33 crc kubenswrapper[4680]: I0217 17:46:33.980640 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4bdxb" podStartSLOduration=102.980624811 podStartE2EDuration="1m42.980624811s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:33.904110815 +0000 UTC m=+123.455009504" watchObservedRunningTime="2026-02-17 17:46:33.980624811 +0000 UTC m=+123.531523490" Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.039962 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:34 crc kubenswrapper[4680]: E0217 17:46:34.041197 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:34.541181443 +0000 UTC m=+124.092080132 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.058294 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s9v7z" podStartSLOduration=103.058268803 podStartE2EDuration="1m43.058268803s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:33.982164019 +0000 UTC m=+123.533062698" watchObservedRunningTime="2026-02-17 17:46:34.058268803 +0000 UTC m=+123.609167482" Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.060570 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5c84c" podStartSLOduration=103.060560754 podStartE2EDuration="1m43.060560754s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:34.034384881 +0000 UTC m=+123.585283560" watchObservedRunningTime="2026-02-17 17:46:34.060560754 +0000 UTC m=+123.611459433" Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.146072 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:34 crc kubenswrapper[4680]: E0217 17:46:34.146538 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:34.646523215 +0000 UTC m=+124.197421894 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.221930 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdrlm" podStartSLOduration=103.221916016 podStartE2EDuration="1m43.221916016s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:34.123252202 +0000 UTC m=+123.674150881" watchObservedRunningTime="2026-02-17 17:46:34.221916016 +0000 UTC m=+123.772814695" Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.247823 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:34 crc kubenswrapper[4680]: E0217 17:46:34.248135 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:34.748120071 +0000 UTC m=+124.299018750 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.301728 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kb4mk" Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.308804 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lsp6v" podStartSLOduration=103.308786875 podStartE2EDuration="1m43.308786875s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:34.222139883 +0000 UTC m=+123.773038562" watchObservedRunningTime="2026-02-17 17:46:34.308786875 +0000 UTC m=+123.859685554" Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.350669 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:34 crc kubenswrapper[4680]: E0217 17:46:34.351093 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:34.851071489 +0000 UTC m=+124.401970158 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.361504 4680 patch_prober.go:28] interesting pod/router-default-5444994796-nkxs4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 17:46:34 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Feb 17 17:46:34 crc kubenswrapper[4680]: [+]process-running ok Feb 17 17:46:34 crc kubenswrapper[4680]: healthz check failed Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.361552 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nkxs4" podUID="aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.363020 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c462h" podStartSLOduration=103.362999279 podStartE2EDuration="1m43.362999279s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:34.311008354 +0000 UTC m=+123.861907033" watchObservedRunningTime="2026-02-17 17:46:34.362999279 +0000 UTC m=+123.913897968" Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.364264 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fxsf8" podStartSLOduration=103.364255868 podStartE2EDuration="1m43.364255868s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:34.361361448 +0000 UTC m=+123.912260147" watchObservedRunningTime="2026-02-17 17:46:34.364255868 +0000 UTC m=+123.915154547" Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.452426 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:34 crc kubenswrapper[4680]: E0217 17:46:34.453272 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:34.953218932 +0000 UTC m=+124.504117611 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.514365 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-prsk2" podStartSLOduration=103.514346811 podStartE2EDuration="1m43.514346811s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:34.407854903 +0000 UTC m=+123.958753582" watchObservedRunningTime="2026-02-17 17:46:34.514346811 +0000 UTC m=+124.065245490" Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.554141 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:34 crc kubenswrapper[4680]: E0217 17:46:34.554552 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:35.054538369 +0000 UTC m=+124.605437048 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.599956 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" podStartSLOduration=103.599940269 podStartE2EDuration="1m43.599940269s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:34.519655566 +0000 UTC m=+124.070554245" watchObservedRunningTime="2026-02-17 17:46:34.599940269 +0000 UTC m=+124.150838948" Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.655800 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:34 crc kubenswrapper[4680]: E0217 17:46:34.656436 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:35.156419284 +0000 UTC m=+124.707317953 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.681256 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kpzts" podStartSLOduration=103.681237585 podStartE2EDuration="1m43.681237585s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:34.603283733 +0000 UTC m=+124.154182412" watchObservedRunningTime="2026-02-17 17:46:34.681237585 +0000 UTC m=+124.232136264" Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.751152 4680 csr.go:261] certificate signing request csr-ndhqd is approved, waiting to be issued Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.759367 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:34 crc kubenswrapper[4680]: E0217 17:46:34.759741 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:35.259729953 +0000 UTC m=+124.810628632 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.760869 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-t9gdn" podStartSLOduration=103.760852208 podStartE2EDuration="1m43.760852208s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:34.758797225 +0000 UTC m=+124.309695904" watchObservedRunningTime="2026-02-17 17:46:34.760852208 +0000 UTC m=+124.311750887" Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.766012 4680 csr.go:257] certificate signing request csr-ndhqd is issued Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.860439 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:34 crc kubenswrapper[4680]: E0217 17:46:34.860852 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:35.360835474 +0000 UTC m=+124.911734153 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.904942 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rbdrt" event={"ID":"cbfccbea-2fcb-4c86-b24e-963961a2cd3a","Type":"ContainerStarted","Data":"a0e05b96b5ac7a80ce6c130758457c13769548082bbc97987a43998ba54d96ad"} Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.914430 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c462h" event={"ID":"67e5bb04-4920-4f51-a729-0af77b16490e","Type":"ContainerStarted","Data":"b68df50d2051cced134e8a448f1ae34f8f7ded36585893085ce265424bbee304"} Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.921554 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gt6kd" event={"ID":"05be763f-68c8-46fe-a8c5-f30ee5afc5db","Type":"ContainerStarted","Data":"581ba835126d57b32de8e2e0acfa897d0b92abcd69476fbcb2a697d01c23b4f2"} Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.921601 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gt6kd" event={"ID":"05be763f-68c8-46fe-a8c5-f30ee5afc5db","Type":"ContainerStarted","Data":"0d228472d7abaa9378888c34bf5749200e9acd8ab9df38d3aebd689ee591ae09"} Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.926859 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khf8m" event={"ID":"fa215132-e3f4-460b-85d2-19d7dff33c1a","Type":"ContainerStarted","Data":"38fd5a664a0809c23d5220f58e42b0af58c41e9723e6f2ede8a883dae3e309b1"} Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.928492 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bzv2j" event={"ID":"a1816a7c-9c0b-49db-b812-dd9b2c605760","Type":"ContainerStarted","Data":"4efdfc5456a42f6b4ef89e2dd777020324fec317fb6a0870b41c07ad36cf04ee"} Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.928613 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rbdrt" podStartSLOduration=103.928594519 podStartE2EDuration="1m43.928594519s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:34.926461403 +0000 UTC m=+124.477360082" watchObservedRunningTime="2026-02-17 17:46:34.928594519 +0000 UTC m=+124.479493198" Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.931508 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-rptfq" event={"ID":"69d9c3bb-1b14-45a5-8aa1-29c8b79fd809","Type":"ContainerStarted","Data":"8d488cdbcc29f59cc311dec2f6611853ad8dc597db15fd961ad55b9206875d43"} Feb 17 
17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.937787 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-prsk2" event={"ID":"fb8e2703-d2bb-403c-9912-e7d9a1e2475d","Type":"ContainerStarted","Data":"d021657e8f4a9058e2dc0de768e18823da37b9ce61313f64f0970173293adc28"} Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.938214 4680 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-prsk2 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.938256 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-prsk2" podUID="fb8e2703-d2bb-403c-9912-e7d9a1e2475d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.942240 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-gd8pp" event={"ID":"789c22f5-7717-4e0f-9df4-6bc17ed6a932","Type":"ContainerStarted","Data":"f437447a624eefb0e653082a60e31f8871b35034867f56411d168fbdbdd21fef"} Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.949112 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-m5zvm" event={"ID":"feae09d1-ce01-4b1a-a198-f267e94d9190","Type":"ContainerStarted","Data":"e8c4a2b2be5922151a0a797c202af56d44ace029655ff3f91222ec670bf1d206"} Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.957727 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x9rhw" event={"ID":"9dad1493-ca2b-4e2d-8c5f-8374b30deefa","Type":"ContainerStarted","Data":"66dd54c1fd668c988b725ba50a3cc6ff4f61eb379c2dbb90659fbb097e4ef8ee"} Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.957773 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x9rhw" event={"ID":"9dad1493-ca2b-4e2d-8c5f-8374b30deefa","Type":"ContainerStarted","Data":"75339a775d341d378ec6a3d6dcf00cc47ed847c1ea466b96e132dab481a537f3"} Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.961413 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:34 crc kubenswrapper[4680]: E0217 17:46:34.961746 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:35.461734698 +0000 UTC m=+125.012633377 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.965268 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-vjddm" event={"ID":"80bd8e1f-fde7-45e5-a482-8dd2d336bb3e","Type":"ContainerStarted","Data":"8aae4c85b9bb41f13fc860fdd9c7fc5e0a7004757e9983444fa83a4d6107af70"} Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.970865 4680 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-fxsf8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.971249 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fxsf8" podUID="5c3da943-badb-4d63-a3e2-541a9ecf9347" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.971356 4680 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rjbs8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Feb 17 17:46:34 crc kubenswrapper[4680]: I0217 17:46:34.971423 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rjbs8" podUID="17edf562-a2eb-4263-bbce-c65013fd645f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.062205 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:35 crc kubenswrapper[4680]: E0217 17:46:35.062384 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:35.562355504 +0000 UTC m=+125.113254183 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.063569 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:35 crc kubenswrapper[4680]: E0217 17:46:35.070262 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:35.570239369 +0000 UTC m=+125.121138048 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.179823 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:35 crc kubenswrapper[4680]: E0217 17:46:35.181122 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:35.681082012 +0000 UTC m=+125.231980691 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.223052 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-m5zvm" podStartSLOduration=9.223034446 podStartE2EDuration="9.223034446s" podCreationTimestamp="2026-02-17 17:46:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:35.219584438 +0000 UTC m=+124.770483117" watchObservedRunningTime="2026-02-17 17:46:35.223034446 +0000 UTC m=+124.773933125" Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.224175 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-khf8m" podStartSLOduration=104.22415405 podStartE2EDuration="1m44.22415405s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:35.084792041 +0000 UTC m=+124.635690720" watchObservedRunningTime="2026-02-17 17:46:35.22415405 +0000 UTC m=+124.775052729" Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.282040 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:35 crc kubenswrapper[4680]: E0217 17:46:35.282401 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:35.782388979 +0000 UTC m=+125.333287648 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.344255 4680 patch_prober.go:28] interesting pod/router-default-5444994796-nkxs4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 17:46:35 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Feb 17 17:46:35 crc kubenswrapper[4680]: [+]process-running ok Feb 17 17:46:35 crc kubenswrapper[4680]: healthz check failed Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.344330 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nkxs4" podUID="aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.374638 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-rptfq" podStartSLOduration=104.374619214 podStartE2EDuration="1m44.374619214s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:35.37351409 +0000 UTC m=+124.924412769" watchObservedRunningTime="2026-02-17 17:46:35.374619214 +0000 UTC m=+124.925517893" Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.382833 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:35 crc kubenswrapper[4680]: E0217 17:46:35.383251 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:35.883233031 +0000 UTC m=+125.434131710 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.477824 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x9rhw" podStartSLOduration=104.47780892 podStartE2EDuration="1m44.47780892s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:35.469977887 +0000 UTC m=+125.020876566" watchObservedRunningTime="2026-02-17 17:46:35.47780892 +0000 UTC m=+125.028707599" Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.484539 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:35 crc kubenswrapper[4680]: E0217 17:46:35.484910 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:35.984893109 +0000 UTC m=+125.535791788 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.555309 4680 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-f2h2c container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.555488 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2h2c" podUID="ad93e803-b1ee-4401-a6c4-0f510b6b8735" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.555641 4680 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-f2h2c container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.555716 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2h2c" podUID="ad93e803-b1ee-4401-a6c4-0f510b6b8735" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.586185 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:35 crc kubenswrapper[4680]: E0217 17:46:35.586620 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:36.086603509 +0000 UTC m=+125.637502188 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.687726 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:35 crc kubenswrapper[4680]: E0217 17:46:35.688108 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:36.188092352 +0000 UTC m=+125.738991031 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.767118 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-17 17:41:34 +0000 UTC, rotation deadline is 2026-12-05 23:35:56.593257288 +0000 UTC Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.767181 4680 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6989h49m20.826100731s for next certificate rotation Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.788918 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:35 crc kubenswrapper[4680]: E0217 17:46:35.789438 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:36.289419499 +0000 UTC m=+125.840318168 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.891075 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:35 crc kubenswrapper[4680]: E0217 17:46:35.891564 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:36.391552592 +0000 UTC m=+125.942451271 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.915138 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2h2c" Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.968258 4680 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-4bdxb container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.968351 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4bdxb" podUID="0b37c5d2-3899-45db-b4a3-976af83ba107" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.974189 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-74j8f" event={"ID":"2953a7b7-7e10-4037-94d5-08322d5b7521","Type":"ContainerStarted","Data":"2045c540a37fb2130425790e2600a49925d15c4caa2b6fd9b9251e332ce9d972"} Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.976152 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bzv2j" event={"ID":"a1816a7c-9c0b-49db-b812-dd9b2c605760","Type":"ContainerStarted","Data":"89d370b1273d0ca67556789d3cdd380a4f9306dca58766bc2e06d164d266afb5"} Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.978437 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-admission-controller-857f4d67dd-gd8pp" event={"ID":"789c22f5-7717-4e0f-9df4-6bc17ed6a932","Type":"ContainerStarted","Data":"75723d1b0ce969fbbe47108e9e32cc031e3dc70148fe1b743313e039b7b3bc20"} Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.980311 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-vjddm" event={"ID":"80bd8e1f-fde7-45e5-a482-8dd2d336bb3e","Type":"ContainerStarted","Data":"ee99586e911fe019f3303592fb3343f71aed2e81836a68321156aa0fcf76b0bf"} Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.980943 4680 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rjbs8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.980984 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rjbs8" podUID="17edf562-a2eb-4263-bbce-c65013fd645f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.981960 4680 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-prsk2 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.981986 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-prsk2" podUID="fb8e2703-d2bb-403c-9912-e7d9a1e2475d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Feb 17 17:46:35 crc kubenswrapper[4680]: I0217 17:46:35.992426 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:35 crc kubenswrapper[4680]: E0217 17:46:35.993025 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:36.493006384 +0000 UTC m=+126.043905063 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:36 crc kubenswrapper[4680]: I0217 17:46:36.018333 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fxsf8" Feb 17 17:46:36 crc kubenswrapper[4680]: I0217 17:46:36.077554 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-vjddm" Feb 17 17:46:36 crc kubenswrapper[4680]: I0217 17:46:36.094130 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:36 crc kubenswrapper[4680]: E0217 17:46:36.097200 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:36.597183979 +0000 UTC m=+126.148082658 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:36 crc kubenswrapper[4680]: I0217 17:46:36.142796 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bzv2j" podStartSLOduration=105.142779766 podStartE2EDuration="1m45.142779766s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:36.079834241 +0000 UTC m=+125.630732910" watchObservedRunningTime="2026-02-17 17:46:36.142779766 +0000 UTC m=+125.693678445" Feb 17 17:46:36 crc kubenswrapper[4680]: I0217 17:46:36.143694 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-vjddm" podStartSLOduration=10.143685624 podStartE2EDuration="10.143685624s" podCreationTimestamp="2026-02-17 17:46:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:36.141992911 +0000 UTC m=+125.692891590" watchObservedRunningTime="2026-02-17 17:46:36.143685624 +0000 UTC m=+125.694584303" Feb 17 17:46:36 crc kubenswrapper[4680]: I0217 17:46:36.207064 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:36 crc kubenswrapper[4680]: E0217 17:46:36.207286 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:36.707257139 +0000 UTC m=+126.258155818 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:36 crc kubenswrapper[4680]: I0217 17:46:36.207496 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:36 crc kubenswrapper[4680]: E0217 17:46:36.207831 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:36.707816166 +0000 UTC m=+126.258714845 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:36 crc kubenswrapper[4680]: I0217 17:46:36.253091 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-gd8pp" podStartSLOduration=105.253072472 podStartE2EDuration="1m45.253072472s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:36.252520345 +0000 UTC m=+125.803419024" watchObservedRunningTime="2026-02-17 17:46:36.253072472 +0000 UTC m=+125.803971151" Feb 17 17:46:36 crc kubenswrapper[4680]: I0217 17:46:36.312053 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:36 crc kubenswrapper[4680]: E0217 17:46:36.312208 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:36.812177588 +0000 UTC m=+126.363076267 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:36 crc kubenswrapper[4680]: I0217 17:46:36.312635 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:36 crc kubenswrapper[4680]: E0217 17:46:36.312925 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:36.812917211 +0000 UTC m=+126.363815890 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:36 crc kubenswrapper[4680]: I0217 17:46:36.336855 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gt6kd" podStartSLOduration=105.336835894 podStartE2EDuration="1m45.336835894s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:36.335942906 +0000 UTC m=+125.886841605" watchObservedRunningTime="2026-02-17 17:46:36.336835894 +0000 UTC m=+125.887734563" Feb 17 17:46:36 crc kubenswrapper[4680]: I0217 17:46:36.345777 4680 patch_prober.go:28] interesting pod/router-default-5444994796-nkxs4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 17:46:36 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Feb 17 17:46:36 crc kubenswrapper[4680]: [+]process-running ok Feb 17 17:46:36 crc kubenswrapper[4680]: healthz check failed Feb 17 17:46:36 crc kubenswrapper[4680]: I0217 17:46:36.345864 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nkxs4" podUID="aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 17:46:36 crc kubenswrapper[4680]: I0217 17:46:36.414103 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:36 crc kubenswrapper[4680]: E0217 17:46:36.414604 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:36.914584579 +0000 UTC m=+126.465483258 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:36 crc kubenswrapper[4680]: I0217 17:46:36.515598 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:36 crc kubenswrapper[4680]: E0217 17:46:36.519596 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:37.019582271 +0000 UTC m=+126.570480950 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:36 crc kubenswrapper[4680]: I0217 17:46:36.616819 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:36 crc kubenswrapper[4680]: E0217 17:46:36.617043 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:37.117009678 +0000 UTC m=+126.667908367 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:36 crc kubenswrapper[4680]: I0217 17:46:36.617109 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:36 crc kubenswrapper[4680]: E0217 17:46:36.617473 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:37.117450801 +0000 UTC m=+126.668349530 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:36 crc kubenswrapper[4680]: I0217 17:46:36.718101 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:36 crc kubenswrapper[4680]: E0217 17:46:36.718282 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:37.218255353 +0000 UTC m=+126.769154032 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:36 crc kubenswrapper[4680]: I0217 17:46:36.718354 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:36 crc kubenswrapper[4680]: E0217 17:46:36.718936 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:37.218928683 +0000 UTC m=+126.769827352 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:36 crc kubenswrapper[4680]: I0217 17:46:36.819845 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:36 crc kubenswrapper[4680]: E0217 17:46:36.820001 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:37.319974882 +0000 UTC m=+126.870873561 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:36 crc kubenswrapper[4680]: I0217 17:46:36.820167 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:36 crc kubenswrapper[4680]: E0217 17:46:36.820472 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:37.320458827 +0000 UTC m=+126.871357506 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:36 crc kubenswrapper[4680]: I0217 17:46:36.921392 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:36 crc kubenswrapper[4680]: E0217 17:46:36.921705 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:37.421689952 +0000 UTC m=+126.972588631 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:36 crc kubenswrapper[4680]: I0217 17:46:36.981350 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4bdxb" Feb 17 17:46:36 crc kubenswrapper[4680]: I0217 17:46:36.988184 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-74j8f" event={"ID":"2953a7b7-7e10-4037-94d5-08322d5b7521","Type":"ContainerStarted","Data":"d55faed7dba8af9bb9967c010ef93c43d7460fd1d26107116acd8a0452ec75bc"} Feb 17 17:46:37 crc kubenswrapper[4680]: I0217 17:46:37.022903 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:37 crc kubenswrapper[4680]: E0217 17:46:37.023488 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:37.523471694 +0000 UTC m=+127.074370373 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:37 crc kubenswrapper[4680]: I0217 17:46:37.124701 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:37 crc kubenswrapper[4680]: E0217 17:46:37.125336 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:37.625305737 +0000 UTC m=+127.176204416 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:37 crc kubenswrapper[4680]: I0217 17:46:37.125619 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:37 crc kubenswrapper[4680]: E0217 17:46:37.126037 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:37.626027359 +0000 UTC m=+127.176926038 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:37 crc kubenswrapper[4680]: I0217 17:46:37.226511 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:37 crc kubenswrapper[4680]: E0217 17:46:37.226741 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:37.726710417 +0000 UTC m=+127.277609106 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:37 crc kubenswrapper[4680]: I0217 17:46:37.226986 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:37 crc kubenswrapper[4680]: E0217 17:46:37.227338 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:37.727326456 +0000 UTC m=+127.278225135 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:37 crc kubenswrapper[4680]: I0217 17:46:37.327543 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:37 crc kubenswrapper[4680]: E0217 17:46:37.327703 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:37.827669753 +0000 UTC m=+127.378568442 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:37 crc kubenswrapper[4680]: I0217 17:46:37.327888 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:37 crc kubenswrapper[4680]: E0217 17:46:37.328358 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:37.828329164 +0000 UTC m=+127.379227843 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:37 crc kubenswrapper[4680]: I0217 17:46:37.343661 4680 patch_prober.go:28] interesting pod/router-default-5444994796-nkxs4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 17:46:37 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Feb 17 17:46:37 crc kubenswrapper[4680]: [+]process-running ok Feb 17 17:46:37 crc kubenswrapper[4680]: healthz check failed Feb 17 17:46:37 crc kubenswrapper[4680]: I0217 17:46:37.343709 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nkxs4" podUID="aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 17:46:37 crc kubenswrapper[4680]: I0217 17:46:37.428662 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:37 crc kubenswrapper[4680]: E0217 17:46:37.429067 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:37.929046523 +0000 UTC m=+127.479945202 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:37 crc kubenswrapper[4680]: I0217 17:46:37.530529 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:37 crc kubenswrapper[4680]: E0217 17:46:37.530810 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:38.030798843 +0000 UTC m=+127.581697522 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:37 crc kubenswrapper[4680]: I0217 17:46:37.631333 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:37 crc kubenswrapper[4680]: E0217 17:46:37.631603 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:38.131572183 +0000 UTC m=+127.682470872 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:37 crc kubenswrapper[4680]: I0217 17:46:37.631872 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:37 crc kubenswrapper[4680]: E0217 17:46:37.632276 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:38.132265175 +0000 UTC m=+127.683163924 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:37 crc kubenswrapper[4680]: I0217 17:46:37.732604 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:37 crc kubenswrapper[4680]: E0217 17:46:37.732795 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:38.232766617 +0000 UTC m=+127.783665286 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:37 crc kubenswrapper[4680]: I0217 17:46:37.732904 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:37 crc kubenswrapper[4680]: E0217 17:46:37.733260 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:38.233253042 +0000 UTC m=+127.784151721 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:37 crc kubenswrapper[4680]: I0217 17:46:37.833609 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:37 crc kubenswrapper[4680]: E0217 17:46:37.833778 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:38.333754684 +0000 UTC m=+127.884653363 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:37 crc kubenswrapper[4680]: I0217 17:46:37.834024 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:37 crc kubenswrapper[4680]: E0217 17:46:37.834352 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:38.334294091 +0000 UTC m=+127.885192770 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:37 crc kubenswrapper[4680]: I0217 17:46:37.934456 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:37 crc kubenswrapper[4680]: E0217 17:46:37.934848 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:38.434833274 +0000 UTC m=+127.985731953 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.000189 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-74j8f" event={"ID":"2953a7b7-7e10-4037-94d5-08322d5b7521","Type":"ContainerStarted","Data":"73c2a17059246b826678cd7dcd2511a929fd02afdcb0eb9656816a43904c1973"} Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.036624 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:38 crc kubenswrapper[4680]: E0217 17:46:38.037217 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:38.537197113 +0000 UTC m=+128.088095872 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.138530 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:38 crc kubenswrapper[4680]: E0217 17:46:38.139010 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:38.638980015 +0000 UTC m=+128.189878704 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.147743 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qdnhz"] Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.148656 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qdnhz" Feb 17 17:46:38 crc kubenswrapper[4680]: W0217 17:46:38.181907 4680 reflector.go:561] object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g": failed to list *v1.Secret: secrets "certified-operators-dockercfg-4rs5g" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Feb 17 17:46:38 crc kubenswrapper[4680]: E0217 17:46:38.182154 4680 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-4rs5g\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"certified-operators-dockercfg-4rs5g\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.185586 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qdnhz"] Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.240362 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a08188de-38b5-4e41-b12d-cf5a1cf488ec-catalog-content\") pod \"certified-operators-qdnhz\" (UID: \"a08188de-38b5-4e41-b12d-cf5a1cf488ec\") " pod="openshift-marketplace/certified-operators-qdnhz" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.240681 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a08188de-38b5-4e41-b12d-cf5a1cf488ec-utilities\") pod \"certified-operators-qdnhz\" (UID: \"a08188de-38b5-4e41-b12d-cf5a1cf488ec\") " pod="openshift-marketplace/certified-operators-qdnhz" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.240801 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.240968 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw2jc\" (UniqueName: \"kubernetes.io/projected/a08188de-38b5-4e41-b12d-cf5a1cf488ec-kube-api-access-gw2jc\") pod \"certified-operators-qdnhz\" (UID: \"a08188de-38b5-4e41-b12d-cf5a1cf488ec\") " 
pod="openshift-marketplace/certified-operators-qdnhz" Feb 17 17:46:38 crc kubenswrapper[4680]: E0217 17:46:38.241407 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:38.741392787 +0000 UTC m=+128.292291466 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.343078 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.343287 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw2jc\" (UniqueName: \"kubernetes.io/projected/a08188de-38b5-4e41-b12d-cf5a1cf488ec-kube-api-access-gw2jc\") pod \"certified-operators-qdnhz\" (UID: \"a08188de-38b5-4e41-b12d-cf5a1cf488ec\") " pod="openshift-marketplace/certified-operators-qdnhz" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.343364 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a08188de-38b5-4e41-b12d-cf5a1cf488ec-catalog-content\") pod \"certified-operators-qdnhz\" (UID: \"a08188de-38b5-4e41-b12d-cf5a1cf488ec\") " pod="openshift-marketplace/certified-operators-qdnhz" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.343384 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a08188de-38b5-4e41-b12d-cf5a1cf488ec-utilities\") pod \"certified-operators-qdnhz\" (UID: \"a08188de-38b5-4e41-b12d-cf5a1cf488ec\") " pod="openshift-marketplace/certified-operators-qdnhz" Feb 17 17:46:38 crc kubenswrapper[4680]: E0217 17:46:38.343740 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:38.843711245 +0000 UTC m=+128.394609924 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.343775 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a08188de-38b5-4e41-b12d-cf5a1cf488ec-utilities\") pod \"certified-operators-qdnhz\" (UID: \"a08188de-38b5-4e41-b12d-cf5a1cf488ec\") " pod="openshift-marketplace/certified-operators-qdnhz" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.344101 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a08188de-38b5-4e41-b12d-cf5a1cf488ec-catalog-content\") pod \"certified-operators-qdnhz\" (UID: \"a08188de-38b5-4e41-b12d-cf5a1cf488ec\") " pod="openshift-marketplace/certified-operators-qdnhz" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.346913 4680 patch_prober.go:28] interesting pod/router-default-5444994796-nkxs4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 17:46:38 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Feb 17 17:46:38 crc kubenswrapper[4680]: [+]process-running ok Feb 17 17:46:38 crc kubenswrapper[4680]: healthz check failed Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.347232 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nkxs4" podUID="aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.354923 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6hgx6"] Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.357235 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6hgx6" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.365980 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.396428 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw2jc\" (UniqueName: \"kubernetes.io/projected/a08188de-38b5-4e41-b12d-cf5a1cf488ec-kube-api-access-gw2jc\") pod \"certified-operators-qdnhz\" (UID: \"a08188de-38b5-4e41-b12d-cf5a1cf488ec\") " pod="openshift-marketplace/certified-operators-qdnhz" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.442399 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6hgx6"] Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.444390 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ea8ef6d-a9c6-477e-89be-fa625c176d42-utilities\") pod \"community-operators-6hgx6\" (UID: \"6ea8ef6d-a9c6-477e-89be-fa625c176d42\") " pod="openshift-marketplace/community-operators-6hgx6" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.444442 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.444531 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ea8ef6d-a9c6-477e-89be-fa625c176d42-catalog-content\") pod \"community-operators-6hgx6\" (UID: \"6ea8ef6d-a9c6-477e-89be-fa625c176d42\") " pod="openshift-marketplace/community-operators-6hgx6" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.444578 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfmq6\" (UniqueName: \"kubernetes.io/projected/6ea8ef6d-a9c6-477e-89be-fa625c176d42-kube-api-access-sfmq6\") pod \"community-operators-6hgx6\" (UID: \"6ea8ef6d-a9c6-477e-89be-fa625c176d42\") " pod="openshift-marketplace/community-operators-6hgx6" Feb 17 17:46:38 crc kubenswrapper[4680]: E0217 17:46:38.444918 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:38.944905669 +0000 UTC m=+128.495804348 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.502212 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.507429 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4spzf"] Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.508511 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4spzf" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.525308 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4spzf"] Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.545978 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.546513 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ea8ef6d-a9c6-477e-89be-fa625c176d42-utilities\") pod \"community-operators-6hgx6\" (UID: \"6ea8ef6d-a9c6-477e-89be-fa625c176d42\") " pod="openshift-marketplace/community-operators-6hgx6" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.546543 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14614809-63c8-4553-aa29-c129132071b7-utilities\") pod \"certified-operators-4spzf\" (UID: \"14614809-63c8-4553-aa29-c129132071b7\") " pod="openshift-marketplace/certified-operators-4spzf" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.546581 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14614809-63c8-4553-aa29-c129132071b7-catalog-content\") pod \"certified-operators-4spzf\" (UID: \"14614809-63c8-4553-aa29-c129132071b7\") " pod="openshift-marketplace/certified-operators-4spzf" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.546631 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ea8ef6d-a9c6-477e-89be-fa625c176d42-catalog-content\") pod \"community-operators-6hgx6\" (UID: \"6ea8ef6d-a9c6-477e-89be-fa625c176d42\") " pod="openshift-marketplace/community-operators-6hgx6" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.546680 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfmq6\" (UniqueName: \"kubernetes.io/projected/6ea8ef6d-a9c6-477e-89be-fa625c176d42-kube-api-access-sfmq6\") pod \"community-operators-6hgx6\" (UID: \"6ea8ef6d-a9c6-477e-89be-fa625c176d42\") " 
pod="openshift-marketplace/community-operators-6hgx6" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.546739 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szfwf\" (UniqueName: \"kubernetes.io/projected/14614809-63c8-4553-aa29-c129132071b7-kube-api-access-szfwf\") pod \"certified-operators-4spzf\" (UID: \"14614809-63c8-4553-aa29-c129132071b7\") " pod="openshift-marketplace/certified-operators-4spzf" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.547219 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ea8ef6d-a9c6-477e-89be-fa625c176d42-utilities\") pod \"community-operators-6hgx6\" (UID: \"6ea8ef6d-a9c6-477e-89be-fa625c176d42\") " pod="openshift-marketplace/community-operators-6hgx6" Feb 17 17:46:38 crc kubenswrapper[4680]: E0217 17:46:38.547520 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:39.047499975 +0000 UTC m=+128.598398664 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.547991 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ea8ef6d-a9c6-477e-89be-fa625c176d42-catalog-content\") pod \"community-operators-6hgx6\" (UID: \"6ea8ef6d-a9c6-477e-89be-fa625c176d42\") " pod="openshift-marketplace/community-operators-6hgx6" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.584462 4680 patch_prober.go:28] interesting pod/downloads-7954f5f757-z9fcm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.584691 4680 patch_prober.go:28] interesting pod/downloads-7954f5f757-z9fcm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.584756 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z9fcm" podUID="8e87e04c-2d08-4f13-8b5a-db27f849132e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.584863 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-z9fcm" podUID="8e87e04c-2d08-4f13-8b5a-db27f849132e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.590844 
4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfmq6\" (UniqueName: \"kubernetes.io/projected/6ea8ef6d-a9c6-477e-89be-fa625c176d42-kube-api-access-sfmq6\") pod \"community-operators-6hgx6\" (UID: \"6ea8ef6d-a9c6-477e-89be-fa625c176d42\") " pod="openshift-marketplace/community-operators-6hgx6" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.648956 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14614809-63c8-4553-aa29-c129132071b7-utilities\") pod \"certified-operators-4spzf\" (UID: \"14614809-63c8-4553-aa29-c129132071b7\") " pod="openshift-marketplace/certified-operators-4spzf" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.648997 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.649027 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14614809-63c8-4553-aa29-c129132071b7-catalog-content\") pod \"certified-operators-4spzf\" (UID: \"14614809-63c8-4553-aa29-c129132071b7\") " pod="openshift-marketplace/certified-operators-4spzf" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.649116 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szfwf\" (UniqueName: \"kubernetes.io/projected/14614809-63c8-4553-aa29-c129132071b7-kube-api-access-szfwf\") pod \"certified-operators-4spzf\" (UID: \"14614809-63c8-4553-aa29-c129132071b7\") " pod="openshift-marketplace/certified-operators-4spzf" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.649738 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14614809-63c8-4553-aa29-c129132071b7-utilities\") pod \"certified-operators-4spzf\" (UID: \"14614809-63c8-4553-aa29-c129132071b7\") " pod="openshift-marketplace/certified-operators-4spzf" Feb 17 17:46:38 crc kubenswrapper[4680]: E0217 17:46:38.649899 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:39.149883846 +0000 UTC m=+128.700782525 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.649958 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14614809-63c8-4553-aa29-c129132071b7-catalog-content\") pod \"certified-operators-4spzf\" (UID: \"14614809-63c8-4553-aa29-c129132071b7\") " pod="openshift-marketplace/certified-operators-4spzf" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.683581 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6hgx6" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.690497 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szfwf\" (UniqueName: \"kubernetes.io/projected/14614809-63c8-4553-aa29-c129132071b7-kube-api-access-szfwf\") pod \"certified-operators-4spzf\" (UID: \"14614809-63c8-4553-aa29-c129132071b7\") " pod="openshift-marketplace/certified-operators-4spzf" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.723255 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d4hnz"] Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.724161 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d4hnz" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.733817 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-5zzk5" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.755866 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d4hnz"] Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.756345 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.756736 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7d3d64c-68b5-430e-9370-32d50d6efaa0-catalog-content\") pod \"community-operators-d4hnz\" (UID: \"f7d3d64c-68b5-430e-9370-32d50d6efaa0\") " pod="openshift-marketplace/community-operators-d4hnz" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.756789 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8tqr\" (UniqueName: \"kubernetes.io/projected/f7d3d64c-68b5-430e-9370-32d50d6efaa0-kube-api-access-n8tqr\") pod \"community-operators-d4hnz\" (UID: \"f7d3d64c-68b5-430e-9370-32d50d6efaa0\") " pod="openshift-marketplace/community-operators-d4hnz" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.756826 4680 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7d3d64c-68b5-430e-9370-32d50d6efaa0-utilities\") pod \"community-operators-d4hnz\" (UID: \"f7d3d64c-68b5-430e-9370-32d50d6efaa0\") " pod="openshift-marketplace/community-operators-d4hnz" Feb 17 17:46:38 crc kubenswrapper[4680]: E0217 17:46:38.756941 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:39.256927271 +0000 UTC m=+128.807825950 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.775564 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.792411 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w88hs" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.855639 4680 patch_prober.go:28] interesting pod/apiserver-76f77b778f-dp7b9 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 17 17:46:38 crc kubenswrapper[4680]: [+]log ok Feb 17 17:46:38 crc kubenswrapper[4680]: [+]etcd ok Feb 17 17:46:38 crc kubenswrapper[4680]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 17 17:46:38 crc kubenswrapper[4680]: [+]poststarthook/generic-apiserver-start-informers ok Feb 17 17:46:38 crc kubenswrapper[4680]: [+]poststarthook/max-in-flight-filter ok Feb 17 17:46:38 crc kubenswrapper[4680]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 17 17:46:38 crc kubenswrapper[4680]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 17 17:46:38 crc kubenswrapper[4680]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 17 17:46:38 crc kubenswrapper[4680]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 17 17:46:38 crc kubenswrapper[4680]: [+]poststarthook/project.openshift.io-projectcache ok Feb 17 17:46:38 crc kubenswrapper[4680]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 17 17:46:38 crc kubenswrapper[4680]: [+]poststarthook/openshift.io-startinformers ok Feb 17 17:46:38 crc kubenswrapper[4680]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 17 17:46:38 crc kubenswrapper[4680]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 17 17:46:38 crc kubenswrapper[4680]: livez check failed Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.855701 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" podUID="13cf25e6-413f-472e-8178-439df59c10d7" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 
17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.862540 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7d3d64c-68b5-430e-9370-32d50d6efaa0-catalog-content\") pod \"community-operators-d4hnz\" (UID: \"f7d3d64c-68b5-430e-9370-32d50d6efaa0\") " pod="openshift-marketplace/community-operators-d4hnz" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.862602 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8tqr\" (UniqueName: \"kubernetes.io/projected/f7d3d64c-68b5-430e-9370-32d50d6efaa0-kube-api-access-n8tqr\") pod \"community-operators-d4hnz\" (UID: \"f7d3d64c-68b5-430e-9370-32d50d6efaa0\") " pod="openshift-marketplace/community-operators-d4hnz" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.862649 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7d3d64c-68b5-430e-9370-32d50d6efaa0-utilities\") pod \"community-operators-d4hnz\" (UID: \"f7d3d64c-68b5-430e-9370-32d50d6efaa0\") " pod="openshift-marketplace/community-operators-d4hnz" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.862685 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.863784 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7d3d64c-68b5-430e-9370-32d50d6efaa0-utilities\") pod \"community-operators-d4hnz\" (UID: \"f7d3d64c-68b5-430e-9370-32d50d6efaa0\") " pod="openshift-marketplace/community-operators-d4hnz" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.864141 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7d3d64c-68b5-430e-9370-32d50d6efaa0-catalog-content\") pod \"community-operators-d4hnz\" (UID: \"f7d3d64c-68b5-430e-9370-32d50d6efaa0\") " pod="openshift-marketplace/community-operators-d4hnz" Feb 17 17:46:38 crc kubenswrapper[4680]: E0217 17:46:38.864196 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:39.364181783 +0000 UTC m=+128.915080532 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.928623 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8tqr\" (UniqueName: \"kubernetes.io/projected/f7d3d64c-68b5-430e-9370-32d50d6efaa0-kube-api-access-n8tqr\") pod \"community-operators-d4hnz\" (UID: \"f7d3d64c-68b5-430e-9370-32d50d6efaa0\") " pod="openshift-marketplace/community-operators-d4hnz" Feb 17 17:46:38 crc kubenswrapper[4680]: I0217 17:46:38.964634 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:38 crc kubenswrapper[4680]: E0217 17:46:38.965010 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:39.464996044 +0000 UTC m=+129.015894723 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.017351 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-74j8f" event={"ID":"2953a7b7-7e10-4037-94d5-08322d5b7521","Type":"ContainerStarted","Data":"3ebfb9daab2d412fa70143131a3cdc3919ea50afe2e09fc70b159ec29caa446d"} Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.066549 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:39 crc kubenswrapper[4680]: E0217 17:46:39.066639 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:39.566621212 +0000 UTC m=+129.117519961 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.074946 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d4hnz" Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.167711 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:39 crc kubenswrapper[4680]: E0217 17:46:39.167943 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:39.667912658 +0000 UTC m=+129.218811337 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.168009 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:39 crc kubenswrapper[4680]: E0217 17:46:39.168303 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:39.668291279 +0000 UTC m=+129.219189958 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.269808 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:39 crc kubenswrapper[4680]: E0217 17:46:39.270001 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:39.769975468 +0000 UTC m=+129.320874157 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.270298 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:39 crc kubenswrapper[4680]: E0217 17:46:39.270644 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:39.770633019 +0000 UTC m=+129.321531698 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.295690 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-74j8f" podStartSLOduration=13.295668266 podStartE2EDuration="13.295668266s" podCreationTimestamp="2026-02-17 17:46:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:39.070620165 +0000 UTC m=+128.621518844" watchObservedRunningTime="2026-02-17 17:46:39.295668266 +0000 UTC m=+128.846566955" Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.298425 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.299158 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.306770 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.307089 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.307240 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.307585 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4spzf" Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.310877 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qdnhz" Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.328581 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.341489 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-nkxs4" Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.371748 4680 patch_prober.go:28] interesting pod/router-default-5444994796-nkxs4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 17:46:39 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Feb 17 17:46:39 crc kubenswrapper[4680]: [+]process-running ok Feb 17 17:46:39 crc kubenswrapper[4680]: healthz check failed Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.371808 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nkxs4" podUID="aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.373233 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.376948 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a8d880a-36c0-43cc-a504-ce61b6ea17ca-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2a8d880a-36c0-43cc-a504-ce61b6ea17ca\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.377014 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a8d880a-36c0-43cc-a504-ce61b6ea17ca-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2a8d880a-36c0-43cc-a504-ce61b6ea17ca\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 17:46:39 crc kubenswrapper[4680]: E0217 17:46:39.377908 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:39.877889721 +0000 UTC m=+129.428788410 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.478465 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a8d880a-36c0-43cc-a504-ce61b6ea17ca-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2a8d880a-36c0-43cc-a504-ce61b6ea17ca\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.478512 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a8d880a-36c0-43cc-a504-ce61b6ea17ca-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2a8d880a-36c0-43cc-a504-ce61b6ea17ca\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.478577 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:39 crc kubenswrapper[4680]: E0217 17:46:39.478866 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:39.978854757 +0000 UTC m=+129.529753426 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.479097 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a8d880a-36c0-43cc-a504-ce61b6ea17ca-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2a8d880a-36c0-43cc-a504-ce61b6ea17ca\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.527952 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a8d880a-36c0-43cc-a504-ce61b6ea17ca-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2a8d880a-36c0-43cc-a504-ce61b6ea17ca\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.548798 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6hgx6"] Feb 17 17:46:39 crc kubenswrapper[4680]: W0217 17:46:39.574109 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ea8ef6d_a9c6_477e_89be_fa625c176d42.slice/crio-e16604bbea75263178916f5f7a49b78c8fb31169a748d325043621c28f2846a1 WatchSource:0}: Error finding container e16604bbea75263178916f5f7a49b78c8fb31169a748d325043621c28f2846a1: Status 404 returned error can't find the container with id e16604bbea75263178916f5f7a49b78c8fb31169a748d325043621c28f2846a1 Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.579778 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:39 crc kubenswrapper[4680]: E0217 17:46:39.579882 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:40.079867254 +0000 UTC m=+129.630765933 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.580868 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:39 crc kubenswrapper[4680]: E0217 17:46:39.581186 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:40.081178835 +0000 UTC m=+129.632077514 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.631427 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-t9gdn" Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.631472 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-t9gdn" Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.636535 4680 patch_prober.go:28] interesting pod/console-f9d7485db-t9gdn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.636603 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-t9gdn" podUID="ed1c57d6-3991-4ebb-9149-63e38e19dc40" containerName="console" probeResult="failure" output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.641875 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d4hnz"] Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.677573 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-prsk2" Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.682488 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.683036 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:39 crc kubenswrapper[4680]: E0217 17:46:39.684144 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:40.184119442 +0000 UTC m=+129.735018191 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.785501 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:39 crc kubenswrapper[4680]: E0217 17:46:39.786143 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:40.286122051 +0000 UTC m=+129.837020770 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.886965 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:39 crc kubenswrapper[4680]: E0217 17:46:39.887635 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:40.387621304 +0000 UTC m=+129.938519983 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.915876 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qdnhz"] Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.946283 4680 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.986681 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-rjbs8" Feb 17 17:46:39 crc kubenswrapper[4680]: I0217 17:46:39.988537 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:39 crc kubenswrapper[4680]: E0217 17:46:39.988837 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:40.488826607 +0000 UTC m=+130.039725286 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.046943 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4spzf"] Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.047954 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d4hnz" event={"ID":"f7d3d64c-68b5-430e-9370-32d50d6efaa0","Type":"ContainerStarted","Data":"b85f0c2626d73082593fac9ec83e2ea9f37352ce6937e14ab354830e761063cf"} Feb 17 17:46:40 crc kubenswrapper[4680]: W0217 17:46:40.056651 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14614809_63c8_4553_aa29_c129132071b7.slice/crio-8215b1596ac0e40c2b67f81d5ce1ac6bdf47a0224965e436534de47ef4cb03a4 WatchSource:0}: Error finding container 8215b1596ac0e40c2b67f81d5ce1ac6bdf47a0224965e436534de47ef4cb03a4: Status 404 returned error can't find the container with id 8215b1596ac0e40c2b67f81d5ce1ac6bdf47a0224965e436534de47ef4cb03a4 Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.070800 4680 generic.go:334] "Generic (PLEG): container finished" podID="6ea8ef6d-a9c6-477e-89be-fa625c176d42" containerID="77af9eaa246cecab5446f790c274287c0c7b12e179c4f2b4a08a3d9d4c5ed7b3" exitCode=0 Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.071018 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hgx6" event={"ID":"6ea8ef6d-a9c6-477e-89be-fa625c176d42","Type":"ContainerDied","Data":"77af9eaa246cecab5446f790c274287c0c7b12e179c4f2b4a08a3d9d4c5ed7b3"} Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.071057 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hgx6" event={"ID":"6ea8ef6d-a9c6-477e-89be-fa625c176d42","Type":"ContainerStarted","Data":"e16604bbea75263178916f5f7a49b78c8fb31169a748d325043621c28f2846a1"} Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.077985 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdnhz" event={"ID":"a08188de-38b5-4e41-b12d-cf5a1cf488ec","Type":"ContainerStarted","Data":"034e3679b202f5e0adf105de050cb91a999c9e000072ea76f0ff320d22444eba"} Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.092227 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:40 crc kubenswrapper[4680]: E0217 17:46:40.092425 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:40.592404884 +0000 UTC m=+130.143303563 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.092601 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:40 crc kubenswrapper[4680]: E0217 17:46:40.092926 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 17:46:40.59291374 +0000 UTC m=+130.143812419 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z8mj5" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.140838 4680 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.180062 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kkz99"] Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.181936 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkz99" Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.186665 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkz99"] Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.193927 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:40 crc kubenswrapper[4680]: E0217 17:46:40.195088 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 17:46:40.695046913 +0000 UTC m=+130.245945582 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.204200 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.256281 4680 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-17T17:46:39.946309467Z","Handler":null,"Name":""} Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.275436 4680 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.275483 4680 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.295549 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de1f714a-f726-49f5-b611-3b0690ed7245-utilities\") pod \"redhat-marketplace-kkz99\" (UID: \"de1f714a-f726-49f5-b611-3b0690ed7245\") " pod="openshift-marketplace/redhat-marketplace-kkz99" Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.295606 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-296x8\" (UniqueName: \"kubernetes.io/projected/de1f714a-f726-49f5-b611-3b0690ed7245-kube-api-access-296x8\") pod \"redhat-marketplace-kkz99\" (UID: \"de1f714a-f726-49f5-b611-3b0690ed7245\") " pod="openshift-marketplace/redhat-marketplace-kkz99" Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.295713 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.295742 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de1f714a-f726-49f5-b611-3b0690ed7245-catalog-content\") pod \"redhat-marketplace-kkz99\" (UID: \"de1f714a-f726-49f5-b611-3b0690ed7245\") " pod="openshift-marketplace/redhat-marketplace-kkz99" Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.347009 4680 patch_prober.go:28] interesting pod/router-default-5444994796-nkxs4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 17:46:40 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Feb 17 17:46:40 crc 
kubenswrapper[4680]: [+]process-running ok Feb 17 17:46:40 crc kubenswrapper[4680]: healthz check failed Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.347069 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nkxs4" podUID="aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.397280 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-296x8\" (UniqueName: \"kubernetes.io/projected/de1f714a-f726-49f5-b611-3b0690ed7245-kube-api-access-296x8\") pod \"redhat-marketplace-kkz99\" (UID: \"de1f714a-f726-49f5-b611-3b0690ed7245\") " pod="openshift-marketplace/redhat-marketplace-kkz99" Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.397839 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de1f714a-f726-49f5-b611-3b0690ed7245-catalog-content\") pod \"redhat-marketplace-kkz99\" (UID: \"de1f714a-f726-49f5-b611-3b0690ed7245\") " pod="openshift-marketplace/redhat-marketplace-kkz99" Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.398311 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de1f714a-f726-49f5-b611-3b0690ed7245-catalog-content\") pod \"redhat-marketplace-kkz99\" (UID: \"de1f714a-f726-49f5-b611-3b0690ed7245\") " pod="openshift-marketplace/redhat-marketplace-kkz99" Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.398415 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de1f714a-f726-49f5-b611-3b0690ed7245-utilities\") pod \"redhat-marketplace-kkz99\" (UID: \"de1f714a-f726-49f5-b611-3b0690ed7245\") " pod="openshift-marketplace/redhat-marketplace-kkz99" Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.398762 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de1f714a-f726-49f5-b611-3b0690ed7245-utilities\") pod \"redhat-marketplace-kkz99\" (UID: \"de1f714a-f726-49f5-b611-3b0690ed7245\") " pod="openshift-marketplace/redhat-marketplace-kkz99" Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.420804 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-296x8\" (UniqueName: \"kubernetes.io/projected/de1f714a-f726-49f5-b611-3b0690ed7245-kube-api-access-296x8\") pod \"redhat-marketplace-kkz99\" (UID: \"de1f714a-f726-49f5-b611-3b0690ed7245\") " pod="openshift-marketplace/redhat-marketplace-kkz99" Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.452146 4680 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.452204 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.491820 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-czdr4"] Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.493034 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-czdr4" Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.501521 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z8mj5\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.511139 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-czdr4"] Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.535141 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.539835 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkz99" Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.601493 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.602550 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03756e56-4c50-4e20-91b1-16fdea59c4a0-utilities\") pod \"redhat-marketplace-czdr4\" (UID: \"03756e56-4c50-4e20-91b1-16fdea59c4a0\") " pod="openshift-marketplace/redhat-marketplace-czdr4" Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.602579 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w446\" (UniqueName: \"kubernetes.io/projected/03756e56-4c50-4e20-91b1-16fdea59c4a0-kube-api-access-6w446\") pod \"redhat-marketplace-czdr4\" (UID: \"03756e56-4c50-4e20-91b1-16fdea59c4a0\") " pod="openshift-marketplace/redhat-marketplace-czdr4" Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.602621 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03756e56-4c50-4e20-91b1-16fdea59c4a0-catalog-content\") pod \"redhat-marketplace-czdr4\" (UID: \"03756e56-4c50-4e20-91b1-16fdea59c4a0\") " pod="openshift-marketplace/redhat-marketplace-czdr4" Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.606536 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.685686 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.694876 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.704361 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6w446\" (UniqueName: \"kubernetes.io/projected/03756e56-4c50-4e20-91b1-16fdea59c4a0-kube-api-access-6w446\") pod \"redhat-marketplace-czdr4\" (UID: \"03756e56-4c50-4e20-91b1-16fdea59c4a0\") " pod="openshift-marketplace/redhat-marketplace-czdr4" Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.704429 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03756e56-4c50-4e20-91b1-16fdea59c4a0-utilities\") pod \"redhat-marketplace-czdr4\" (UID: \"03756e56-4c50-4e20-91b1-16fdea59c4a0\") " pod="openshift-marketplace/redhat-marketplace-czdr4" Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.707066 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03756e56-4c50-4e20-91b1-16fdea59c4a0-utilities\") pod \"redhat-marketplace-czdr4\" (UID: \"03756e56-4c50-4e20-91b1-16fdea59c4a0\") " pod="openshift-marketplace/redhat-marketplace-czdr4" Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.707377 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03756e56-4c50-4e20-91b1-16fdea59c4a0-catalog-content\") pod \"redhat-marketplace-czdr4\" (UID: \"03756e56-4c50-4e20-91b1-16fdea59c4a0\") " pod="openshift-marketplace/redhat-marketplace-czdr4" Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.707784 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03756e56-4c50-4e20-91b1-16fdea59c4a0-catalog-content\") pod \"redhat-marketplace-czdr4\" (UID: \"03756e56-4c50-4e20-91b1-16fdea59c4a0\") " pod="openshift-marketplace/redhat-marketplace-czdr4" Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.737912 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w446\" (UniqueName: \"kubernetes.io/projected/03756e56-4c50-4e20-91b1-16fdea59c4a0-kube-api-access-6w446\") pod \"redhat-marketplace-czdr4\" (UID: \"03756e56-4c50-4e20-91b1-16fdea59c4a0\") " pod="openshift-marketplace/redhat-marketplace-czdr4" Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.740116 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkz99"] Feb 17 17:46:40 crc kubenswrapper[4680]: W0217 17:46:40.747576 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde1f714a_f726_49f5_b611_3b0690ed7245.slice/crio-9503389bb276854d5f5c1fe94188f23fe4913c315b92169ae937130e1e6ffab8 WatchSource:0}: Error finding container 9503389bb276854d5f5c1fe94188f23fe4913c315b92169ae937130e1e6ffab8: Status 404 returned error can't find the container with id 9503389bb276854d5f5c1fe94188f23fe4913c315b92169ae937130e1e6ffab8 Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.818171 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-czdr4" Feb 17 17:46:40 crc kubenswrapper[4680]: I0217 17:46:40.935855 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-z8mj5"] Feb 17 17:46:40 crc kubenswrapper[4680]: W0217 17:46:40.949010 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c640310_31cf_4feb_a173_5688042c6b6d.slice/crio-7cc1a306007aba064311e0e3df1d90d3dff699860a758162b7f27493230e0246 WatchSource:0}: Error finding container 7cc1a306007aba064311e0e3df1d90d3dff699860a758162b7f27493230e0246: Status 404 returned error can't find the container with id 7cc1a306007aba064311e0e3df1d90d3dff699860a758162b7f27493230e0246 Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.020143 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-czdr4"] Feb 17 17:46:41 crc kubenswrapper[4680]: W0217 17:46:41.041110 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03756e56_4c50_4e20_91b1_16fdea59c4a0.slice/crio-0e98dc2eeee3409eec2ac800cbd37e08d99c03fa9b4c4ee92fd33a40063b8850 WatchSource:0}: Error finding container 0e98dc2eeee3409eec2ac800cbd37e08d99c03fa9b4c4ee92fd33a40063b8850: Status 404 returned error can't find the container with id 0e98dc2eeee3409eec2ac800cbd37e08d99c03fa9b4c4ee92fd33a40063b8850 Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.096262 4680 generic.go:334] "Generic (PLEG): container finished" podID="f7d3d64c-68b5-430e-9370-32d50d6efaa0" containerID="2db33c692103b6519d0e2d6e7b965a0878eb61dba8a1a28fea254c338c8dae34" exitCode=0 Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.096351 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d4hnz" event={"ID":"f7d3d64c-68b5-430e-9370-32d50d6efaa0","Type":"ContainerDied","Data":"2db33c692103b6519d0e2d6e7b965a0878eb61dba8a1a28fea254c338c8dae34"} Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.107663 4680 generic.go:334] "Generic (PLEG): container finished" podID="a08188de-38b5-4e41-b12d-cf5a1cf488ec" containerID="081ccfa5a385b379a43eb624b861f84d5d9e4a9c6531970f7a7747a93bc4326e" exitCode=0 Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.135761 4680 generic.go:334] "Generic (PLEG): container finished" podID="4b6a027a-aad9-445b-8bab-ac0004a7c565" containerID="e7dfc9b6c735a791e5e2cc87e2682d01f1809c7bdfbf11c2c9ddfb7fa72e8119" exitCode=0 Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.143176 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.145058 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkz99" event={"ID":"de1f714a-f726-49f5-b611-3b0690ed7245","Type":"ContainerStarted","Data":"9503389bb276854d5f5c1fe94188f23fe4913c315b92169ae937130e1e6ffab8"} Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.145100 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdnhz" event={"ID":"a08188de-38b5-4e41-b12d-cf5a1cf488ec","Type":"ContainerDied","Data":"081ccfa5a385b379a43eb624b861f84d5d9e4a9c6531970f7a7747a93bc4326e"} Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 
17:46:41.145119 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4spzf" event={"ID":"14614809-63c8-4553-aa29-c129132071b7","Type":"ContainerStarted","Data":"77a5a4732852c667a32656dd72bc2eff4aad01e89db3d827d5d467badaead33f"} Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.145132 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4spzf" event={"ID":"14614809-63c8-4553-aa29-c129132071b7","Type":"ContainerStarted","Data":"8215b1596ac0e40c2b67f81d5ce1ac6bdf47a0224965e436534de47ef4cb03a4"} Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.145142 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2a8d880a-36c0-43cc-a504-ce61b6ea17ca","Type":"ContainerStarted","Data":"b3f5843677bea80cad2a1743c13e6e400d32c2b7ce267a146c091ee331d7a649"} Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.145153 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522505-8tbt2" event={"ID":"4b6a027a-aad9-445b-8bab-ac0004a7c565","Type":"ContainerDied","Data":"e7dfc9b6c735a791e5e2cc87e2682d01f1809c7bdfbf11c2c9ddfb7fa72e8119"} Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.150246 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-czdr4" event={"ID":"03756e56-4c50-4e20-91b1-16fdea59c4a0","Type":"ContainerStarted","Data":"0e98dc2eeee3409eec2ac800cbd37e08d99c03fa9b4c4ee92fd33a40063b8850"} Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.151857 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" event={"ID":"1c640310-31cf-4feb-a173-5688042c6b6d","Type":"ContainerStarted","Data":"7cc1a306007aba064311e0e3df1d90d3dff699860a758162b7f27493230e0246"} Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.319771 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pdls7"] Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.322219 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pdls7" Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.325654 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.346958 4680 patch_prober.go:28] interesting pod/router-default-5444994796-nkxs4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 17:46:41 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Feb 17 17:46:41 crc kubenswrapper[4680]: [+]process-running ok Feb 17 17:46:41 crc kubenswrapper[4680]: healthz check failed Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.347005 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nkxs4" podUID="aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.348611 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pdls7"] Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.425171 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgxdr\" (UniqueName: \"kubernetes.io/projected/736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2-kube-api-access-mgxdr\") pod \"redhat-operators-pdls7\" (UID: \"736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2\") " pod="openshift-marketplace/redhat-operators-pdls7" Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.425249 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2-catalog-content\") pod \"redhat-operators-pdls7\" (UID: \"736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2\") " pod="openshift-marketplace/redhat-operators-pdls7" Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.425286 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2-utilities\") pod \"redhat-operators-pdls7\" (UID: \"736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2\") " pod="openshift-marketplace/redhat-operators-pdls7" Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.526421 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgxdr\" (UniqueName: \"kubernetes.io/projected/736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2-kube-api-access-mgxdr\") pod \"redhat-operators-pdls7\" (UID: \"736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2\") " pod="openshift-marketplace/redhat-operators-pdls7" Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.526543 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2-catalog-content\") pod \"redhat-operators-pdls7\" (UID: \"736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2\") " pod="openshift-marketplace/redhat-operators-pdls7" Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.526602 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2-utilities\") pod 
\"redhat-operators-pdls7\" (UID: \"736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2\") " pod="openshift-marketplace/redhat-operators-pdls7" Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.527106 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2-catalog-content\") pod \"redhat-operators-pdls7\" (UID: \"736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2\") " pod="openshift-marketplace/redhat-operators-pdls7" Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.527220 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2-utilities\") pod \"redhat-operators-pdls7\" (UID: \"736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2\") " pod="openshift-marketplace/redhat-operators-pdls7" Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.564341 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgxdr\" (UniqueName: \"kubernetes.io/projected/736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2-kube-api-access-mgxdr\") pod \"redhat-operators-pdls7\" (UID: \"736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2\") " pod="openshift-marketplace/redhat-operators-pdls7" Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.645019 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pdls7" Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.693265 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-76dxl"] Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.694300 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-76dxl" Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.709055 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-76dxl"] Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.731169 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a586c719-e0d8-4941-ae16-3034a8f66616-utilities\") pod \"redhat-operators-76dxl\" (UID: \"a586c719-e0d8-4941-ae16-3034a8f66616\") " pod="openshift-marketplace/redhat-operators-76dxl" Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.731211 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a586c719-e0d8-4941-ae16-3034a8f66616-catalog-content\") pod \"redhat-operators-76dxl\" (UID: \"a586c719-e0d8-4941-ae16-3034a8f66616\") " pod="openshift-marketplace/redhat-operators-76dxl" Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.731246 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntrq5\" (UniqueName: \"kubernetes.io/projected/a586c719-e0d8-4941-ae16-3034a8f66616-kube-api-access-ntrq5\") pod \"redhat-operators-76dxl\" (UID: \"a586c719-e0d8-4941-ae16-3034a8f66616\") " pod="openshift-marketplace/redhat-operators-76dxl" Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.832734 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a586c719-e0d8-4941-ae16-3034a8f66616-utilities\") pod \"redhat-operators-76dxl\" (UID: \"a586c719-e0d8-4941-ae16-3034a8f66616\") " 
pod="openshift-marketplace/redhat-operators-76dxl" Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.833826 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a586c719-e0d8-4941-ae16-3034a8f66616-catalog-content\") pod \"redhat-operators-76dxl\" (UID: \"a586c719-e0d8-4941-ae16-3034a8f66616\") " pod="openshift-marketplace/redhat-operators-76dxl" Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.833656 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a586c719-e0d8-4941-ae16-3034a8f66616-utilities\") pod \"redhat-operators-76dxl\" (UID: \"a586c719-e0d8-4941-ae16-3034a8f66616\") " pod="openshift-marketplace/redhat-operators-76dxl" Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.834280 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a586c719-e0d8-4941-ae16-3034a8f66616-catalog-content\") pod \"redhat-operators-76dxl\" (UID: \"a586c719-e0d8-4941-ae16-3034a8f66616\") " pod="openshift-marketplace/redhat-operators-76dxl" Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.833926 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntrq5\" (UniqueName: \"kubernetes.io/projected/a586c719-e0d8-4941-ae16-3034a8f66616-kube-api-access-ntrq5\") pod \"redhat-operators-76dxl\" (UID: \"a586c719-e0d8-4941-ae16-3034a8f66616\") " pod="openshift-marketplace/redhat-operators-76dxl" Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.851418 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntrq5\" (UniqueName: \"kubernetes.io/projected/a586c719-e0d8-4941-ae16-3034a8f66616-kube-api-access-ntrq5\") pod \"redhat-operators-76dxl\" (UID: \"a586c719-e0d8-4941-ae16-3034a8f66616\") " pod="openshift-marketplace/redhat-operators-76dxl" Feb 17 17:46:41 crc kubenswrapper[4680]: I0217 17:46:41.980080 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pdls7"] Feb 17 17:46:42 crc kubenswrapper[4680]: I0217 17:46:42.042501 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-76dxl" Feb 17 17:46:42 crc kubenswrapper[4680]: I0217 17:46:42.168911 4680 generic.go:334] "Generic (PLEG): container finished" podID="14614809-63c8-4553-aa29-c129132071b7" containerID="77a5a4732852c667a32656dd72bc2eff4aad01e89db3d827d5d467badaead33f" exitCode=0 Feb 17 17:46:42 crc kubenswrapper[4680]: I0217 17:46:42.168981 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4spzf" event={"ID":"14614809-63c8-4553-aa29-c129132071b7","Type":"ContainerDied","Data":"77a5a4732852c667a32656dd72bc2eff4aad01e89db3d827d5d467badaead33f"} Feb 17 17:46:42 crc kubenswrapper[4680]: I0217 17:46:42.267281 4680 generic.go:334] "Generic (PLEG): container finished" podID="2a8d880a-36c0-43cc-a504-ce61b6ea17ca" containerID="a8ec5cbb62bf0a4f120f6f7bbd6c45d2396dc948c8bc848f75df70dfd0064455" exitCode=0 Feb 17 17:46:42 crc kubenswrapper[4680]: I0217 17:46:42.268947 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2a8d880a-36c0-43cc-a504-ce61b6ea17ca","Type":"ContainerDied","Data":"a8ec5cbb62bf0a4f120f6f7bbd6c45d2396dc948c8bc848f75df70dfd0064455"} Feb 17 17:46:42 crc kubenswrapper[4680]: I0217 17:46:42.300037 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" event={"ID":"1c640310-31cf-4feb-a173-5688042c6b6d","Type":"ContainerStarted","Data":"7993783c9a0bb724bc260f68469d98e67a7adf76ae25457aa001f3d42da5ed9d"} Feb 17 17:46:42 crc kubenswrapper[4680]: I0217 17:46:42.301020 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:46:42 crc kubenswrapper[4680]: I0217 17:46:42.324821 4680 generic.go:334] "Generic (PLEG): container finished" podID="03756e56-4c50-4e20-91b1-16fdea59c4a0" containerID="183033e40f62f9955d6d43c72080738369f7aee0329f28f4985c340a34c16a5f" exitCode=0 Feb 17 17:46:42 crc kubenswrapper[4680]: I0217 17:46:42.324947 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-czdr4" event={"ID":"03756e56-4c50-4e20-91b1-16fdea59c4a0","Type":"ContainerDied","Data":"183033e40f62f9955d6d43c72080738369f7aee0329f28f4985c340a34c16a5f"} Feb 17 17:46:42 crc kubenswrapper[4680]: I0217 17:46:42.345762 4680 generic.go:334] "Generic (PLEG): container finished" podID="de1f714a-f726-49f5-b611-3b0690ed7245" containerID="258a2ce3dc22aab310530bff1446de5aa35aead1a833cfc68dc76b4396b204bd" exitCode=0 Feb 17 17:46:42 crc kubenswrapper[4680]: I0217 17:46:42.345923 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkz99" event={"ID":"de1f714a-f726-49f5-b611-3b0690ed7245","Type":"ContainerDied","Data":"258a2ce3dc22aab310530bff1446de5aa35aead1a833cfc68dc76b4396b204bd"} Feb 17 17:46:42 crc kubenswrapper[4680]: I0217 17:46:42.354131 4680 patch_prober.go:28] interesting pod/router-default-5444994796-nkxs4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 17:46:42 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Feb 17 17:46:42 crc kubenswrapper[4680]: [+]process-running ok Feb 17 17:46:42 crc kubenswrapper[4680]: healthz check failed Feb 17 17:46:42 crc kubenswrapper[4680]: I0217 17:46:42.354174 4680 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-nkxs4" podUID="aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 17:46:42 crc kubenswrapper[4680]: I0217 17:46:42.364713 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pdls7" event={"ID":"736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2","Type":"ContainerStarted","Data":"8cf9818629dd52a77c9ba3711a21f0065b2a721c338e64ce90423198b35167ab"} Feb 17 17:46:42 crc kubenswrapper[4680]: I0217 17:46:42.380246 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" podStartSLOduration=111.380209872 podStartE2EDuration="1m51.380209872s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:46:42.348007542 +0000 UTC m=+131.898906221" watchObservedRunningTime="2026-02-17 17:46:42.380209872 +0000 UTC m=+131.931108551" Feb 17 17:46:42 crc kubenswrapper[4680]: I0217 17:46:42.680022 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-76dxl"] Feb 17 17:46:42 crc kubenswrapper[4680]: I0217 17:46:42.752446 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522505-8tbt2" Feb 17 17:46:42 crc kubenswrapper[4680]: I0217 17:46:42.790743 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b6a027a-aad9-445b-8bab-ac0004a7c565-config-volume\") pod \"4b6a027a-aad9-445b-8bab-ac0004a7c565\" (UID: \"4b6a027a-aad9-445b-8bab-ac0004a7c565\") " Feb 17 17:46:42 crc kubenswrapper[4680]: I0217 17:46:42.790812 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b6a027a-aad9-445b-8bab-ac0004a7c565-secret-volume\") pod \"4b6a027a-aad9-445b-8bab-ac0004a7c565\" (UID: \"4b6a027a-aad9-445b-8bab-ac0004a7c565\") " Feb 17 17:46:42 crc kubenswrapper[4680]: I0217 17:46:42.790897 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hh4rz\" (UniqueName: \"kubernetes.io/projected/4b6a027a-aad9-445b-8bab-ac0004a7c565-kube-api-access-hh4rz\") pod \"4b6a027a-aad9-445b-8bab-ac0004a7c565\" (UID: \"4b6a027a-aad9-445b-8bab-ac0004a7c565\") " Feb 17 17:46:42 crc kubenswrapper[4680]: I0217 17:46:42.792793 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b6a027a-aad9-445b-8bab-ac0004a7c565-config-volume" (OuterVolumeSpecName: "config-volume") pod "4b6a027a-aad9-445b-8bab-ac0004a7c565" (UID: "4b6a027a-aad9-445b-8bab-ac0004a7c565"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:46:42 crc kubenswrapper[4680]: I0217 17:46:42.808437 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b6a027a-aad9-445b-8bab-ac0004a7c565-kube-api-access-hh4rz" (OuterVolumeSpecName: "kube-api-access-hh4rz") pod "4b6a027a-aad9-445b-8bab-ac0004a7c565" (UID: "4b6a027a-aad9-445b-8bab-ac0004a7c565"). InnerVolumeSpecName "kube-api-access-hh4rz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:46:42 crc kubenswrapper[4680]: I0217 17:46:42.813930 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b6a027a-aad9-445b-8bab-ac0004a7c565-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4b6a027a-aad9-445b-8bab-ac0004a7c565" (UID: "4b6a027a-aad9-445b-8bab-ac0004a7c565"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:46:42 crc kubenswrapper[4680]: I0217 17:46:42.892266 4680 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b6a027a-aad9-445b-8bab-ac0004a7c565-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:46:42 crc kubenswrapper[4680]: I0217 17:46:42.892332 4680 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b6a027a-aad9-445b-8bab-ac0004a7c565-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:46:42 crc kubenswrapper[4680]: I0217 17:46:42.892346 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hh4rz\" (UniqueName: \"kubernetes.io/projected/4b6a027a-aad9-445b-8bab-ac0004a7c565-kube-api-access-hh4rz\") on node \"crc\" DevicePath \"\"" Feb 17 17:46:43 crc kubenswrapper[4680]: I0217 17:46:43.352398 4680 patch_prober.go:28] interesting pod/router-default-5444994796-nkxs4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 17:46:43 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Feb 17 17:46:43 crc kubenswrapper[4680]: [+]process-running ok Feb 17 17:46:43 crc kubenswrapper[4680]: healthz check failed Feb 17 17:46:43 crc kubenswrapper[4680]: I0217 17:46:43.352462 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nkxs4" podUID="aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 17:46:43 crc kubenswrapper[4680]: I0217 17:46:43.434371 4680 generic.go:334] "Generic (PLEG): container finished" podID="a586c719-e0d8-4941-ae16-3034a8f66616" containerID="f2b00c8fce0898083fd3c2944d197fd85a6ff9915f1d8d8280fd56f8ed8974c2" exitCode=0 Feb 17 17:46:43 crc kubenswrapper[4680]: I0217 17:46:43.434835 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76dxl" event={"ID":"a586c719-e0d8-4941-ae16-3034a8f66616","Type":"ContainerDied","Data":"f2b00c8fce0898083fd3c2944d197fd85a6ff9915f1d8d8280fd56f8ed8974c2"} Feb 17 17:46:43 crc kubenswrapper[4680]: I0217 17:46:43.435054 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76dxl" event={"ID":"a586c719-e0d8-4941-ae16-3034a8f66616","Type":"ContainerStarted","Data":"ac1e9ad1f0b927f9871fb1aaf0edc9b07361765832b11a4efa994b26c4e1e2e8"} Feb 17 17:46:43 crc kubenswrapper[4680]: I0217 17:46:43.459364 4680 generic.go:334] "Generic (PLEG): container finished" podID="736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2" containerID="25093738e65e7ec1c070b998df9d923bc3cd3fb497457a30bac3528df15b0e69" exitCode=0 Feb 17 17:46:43 crc kubenswrapper[4680]: I0217 17:46:43.459497 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pdls7" 
event={"ID":"736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2","Type":"ContainerDied","Data":"25093738e65e7ec1c070b998df9d923bc3cd3fb497457a30bac3528df15b0e69"} Feb 17 17:46:43 crc kubenswrapper[4680]: I0217 17:46:43.472484 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522505-8tbt2" Feb 17 17:46:43 crc kubenswrapper[4680]: I0217 17:46:43.473142 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522505-8tbt2" event={"ID":"4b6a027a-aad9-445b-8bab-ac0004a7c565","Type":"ContainerDied","Data":"adf5d571e0c6669a95cabada96254a78386b49409b5b79e5b28c183888d3fbc6"} Feb 17 17:46:43 crc kubenswrapper[4680]: I0217 17:46:43.473176 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adf5d571e0c6669a95cabada96254a78386b49409b5b79e5b28c183888d3fbc6" Feb 17 17:46:43 crc kubenswrapper[4680]: I0217 17:46:43.666385 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:43 crc kubenswrapper[4680]: I0217 17:46:43.674194 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-dp7b9" Feb 17 17:46:43 crc kubenswrapper[4680]: I0217 17:46:43.876653 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 17 17:46:43 crc kubenswrapper[4680]: E0217 17:46:43.877331 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b6a027a-aad9-445b-8bab-ac0004a7c565" containerName="collect-profiles" Feb 17 17:46:43 crc kubenswrapper[4680]: I0217 17:46:43.877351 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b6a027a-aad9-445b-8bab-ac0004a7c565" containerName="collect-profiles" Feb 17 17:46:43 crc kubenswrapper[4680]: I0217 17:46:43.877608 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b6a027a-aad9-445b-8bab-ac0004a7c565" containerName="collect-profiles" Feb 17 17:46:43 crc kubenswrapper[4680]: I0217 17:46:43.878255 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 17 17:46:43 crc kubenswrapper[4680]: I0217 17:46:43.878385 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 17:46:43 crc kubenswrapper[4680]: I0217 17:46:43.884086 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 17 17:46:43 crc kubenswrapper[4680]: I0217 17:46:43.884455 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 17 17:46:43 crc kubenswrapper[4680]: I0217 17:46:43.920627 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/219f0a4e-01b0-41f6-a44d-2db2f66b0ec4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"219f0a4e-01b0-41f6-a44d-2db2f66b0ec4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 17:46:43 crc kubenswrapper[4680]: I0217 17:46:43.920675 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/219f0a4e-01b0-41f6-a44d-2db2f66b0ec4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"219f0a4e-01b0-41f6-a44d-2db2f66b0ec4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 17:46:44 crc kubenswrapper[4680]: I0217 17:46:44.021971 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/219f0a4e-01b0-41f6-a44d-2db2f66b0ec4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"219f0a4e-01b0-41f6-a44d-2db2f66b0ec4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 17:46:44 crc kubenswrapper[4680]: I0217 17:46:44.022025 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/219f0a4e-01b0-41f6-a44d-2db2f66b0ec4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"219f0a4e-01b0-41f6-a44d-2db2f66b0ec4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 17:46:44 crc kubenswrapper[4680]: I0217 17:46:44.022211 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/219f0a4e-01b0-41f6-a44d-2db2f66b0ec4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"219f0a4e-01b0-41f6-a44d-2db2f66b0ec4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 17:46:44 crc kubenswrapper[4680]: I0217 17:46:44.060444 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/219f0a4e-01b0-41f6-a44d-2db2f66b0ec4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"219f0a4e-01b0-41f6-a44d-2db2f66b0ec4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 17:46:44 crc kubenswrapper[4680]: I0217 17:46:44.345745 4680 patch_prober.go:28] interesting pod/router-default-5444994796-nkxs4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 17:46:44 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Feb 17 17:46:44 crc kubenswrapper[4680]: [+]process-running ok Feb 17 17:46:44 crc kubenswrapper[4680]: healthz check failed Feb 17 17:46:44 crc kubenswrapper[4680]: I0217 17:46:44.345796 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nkxs4" podUID="aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 17:46:44 crc kubenswrapper[4680]: I0217 17:46:44.355429 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 17:46:44 crc kubenswrapper[4680]: I0217 17:46:44.406091 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 17:46:44 crc kubenswrapper[4680]: I0217 17:46:44.537921 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a8d880a-36c0-43cc-a504-ce61b6ea17ca-kubelet-dir\") pod \"2a8d880a-36c0-43cc-a504-ce61b6ea17ca\" (UID: \"2a8d880a-36c0-43cc-a504-ce61b6ea17ca\") " Feb 17 17:46:44 crc kubenswrapper[4680]: I0217 17:46:44.538058 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a8d880a-36c0-43cc-a504-ce61b6ea17ca-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2a8d880a-36c0-43cc-a504-ce61b6ea17ca" (UID: "2a8d880a-36c0-43cc-a504-ce61b6ea17ca"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:46:44 crc kubenswrapper[4680]: I0217 17:46:44.538345 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a8d880a-36c0-43cc-a504-ce61b6ea17ca-kube-api-access\") pod \"2a8d880a-36c0-43cc-a504-ce61b6ea17ca\" (UID: \"2a8d880a-36c0-43cc-a504-ce61b6ea17ca\") " Feb 17 17:46:44 crc kubenswrapper[4680]: I0217 17:46:44.538637 4680 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a8d880a-36c0-43cc-a504-ce61b6ea17ca-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 17:46:44 crc kubenswrapper[4680]: I0217 17:46:44.562830 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a8d880a-36c0-43cc-a504-ce61b6ea17ca-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2a8d880a-36c0-43cc-a504-ce61b6ea17ca" (UID: "2a8d880a-36c0-43cc-a504-ce61b6ea17ca"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:46:44 crc kubenswrapper[4680]: I0217 17:46:44.640558 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a8d880a-36c0-43cc-a504-ce61b6ea17ca-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 17:46:44 crc kubenswrapper[4680]: I0217 17:46:44.673594 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2a8d880a-36c0-43cc-a504-ce61b6ea17ca","Type":"ContainerDied","Data":"b3f5843677bea80cad2a1743c13e6e400d32c2b7ce267a146c091ee331d7a649"} Feb 17 17:46:44 crc kubenswrapper[4680]: I0217 17:46:44.673666 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3f5843677bea80cad2a1743c13e6e400d32c2b7ce267a146c091ee331d7a649" Feb 17 17:46:44 crc kubenswrapper[4680]: I0217 17:46:44.673746 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 17:46:45 crc kubenswrapper[4680]: I0217 17:46:45.093464 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-vjddm" Feb 17 17:46:45 crc kubenswrapper[4680]: I0217 17:46:45.169755 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 17 17:46:45 crc kubenswrapper[4680]: W0217 17:46:45.187040 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod219f0a4e_01b0_41f6_a44d_2db2f66b0ec4.slice/crio-fc1e14e7b6cfb25d34d38c1ae468ac6d91eac1a74e8ac35043f151613336a629 WatchSource:0}: Error finding container fc1e14e7b6cfb25d34d38c1ae468ac6d91eac1a74e8ac35043f151613336a629: Status 404 returned error can't find the container with id fc1e14e7b6cfb25d34d38c1ae468ac6d91eac1a74e8ac35043f151613336a629 Feb 17 17:46:45 crc kubenswrapper[4680]: I0217 17:46:45.347219 4680 patch_prober.go:28] interesting pod/router-default-5444994796-nkxs4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 17:46:45 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Feb 17 17:46:45 crc kubenswrapper[4680]: [+]process-running ok Feb 17 17:46:45 crc kubenswrapper[4680]: healthz check failed Feb 17 17:46:45 crc kubenswrapper[4680]: I0217 17:46:45.347348 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nkxs4" podUID="aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 17:46:46 crc kubenswrapper[4680]: I0217 17:46:46.155595 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"219f0a4e-01b0-41f6-a44d-2db2f66b0ec4","Type":"ContainerStarted","Data":"fc1e14e7b6cfb25d34d38c1ae468ac6d91eac1a74e8ac35043f151613336a629"} Feb 17 17:46:46 crc kubenswrapper[4680]: I0217 17:46:46.762636 4680 patch_prober.go:28] interesting pod/router-default-5444994796-nkxs4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 17:46:46 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Feb 17 17:46:46 crc kubenswrapper[4680]: [+]process-running ok Feb 17 17:46:46 crc kubenswrapper[4680]: healthz check failed Feb 17 17:46:46 crc kubenswrapper[4680]: I0217 17:46:46.762687 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nkxs4" podUID="aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 17:46:47 crc kubenswrapper[4680]: I0217 17:46:47.182588 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"219f0a4e-01b0-41f6-a44d-2db2f66b0ec4","Type":"ContainerStarted","Data":"80522ea75e18068d0e915f0d974b04a9f9255ea858cb124601cd0ce1484a97e3"} Feb 17 17:46:47 crc kubenswrapper[4680]: I0217 17:46:47.344119 4680 patch_prober.go:28] interesting pod/router-default-5444994796-nkxs4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Feb 17 17:46:47 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Feb 17 17:46:47 crc kubenswrapper[4680]: [+]process-running ok Feb 17 17:46:47 crc kubenswrapper[4680]: healthz check failed Feb 17 17:46:47 crc kubenswrapper[4680]: I0217 17:46:47.344186 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nkxs4" podUID="aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 17:46:48 crc kubenswrapper[4680]: I0217 17:46:48.191305 4680 generic.go:334] "Generic (PLEG): container finished" podID="219f0a4e-01b0-41f6-a44d-2db2f66b0ec4" containerID="80522ea75e18068d0e915f0d974b04a9f9255ea858cb124601cd0ce1484a97e3" exitCode=0 Feb 17 17:46:48 crc kubenswrapper[4680]: I0217 17:46:48.191447 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"219f0a4e-01b0-41f6-a44d-2db2f66b0ec4","Type":"ContainerDied","Data":"80522ea75e18068d0e915f0d974b04a9f9255ea858cb124601cd0ce1484a97e3"} Feb 17 17:46:48 crc kubenswrapper[4680]: I0217 17:46:48.343394 4680 patch_prober.go:28] interesting pod/router-default-5444994796-nkxs4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 17:46:48 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Feb 17 17:46:48 crc kubenswrapper[4680]: [+]process-running ok Feb 17 17:46:48 crc kubenswrapper[4680]: healthz check failed Feb 17 17:46:48 crc kubenswrapper[4680]: I0217 17:46:48.343787 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nkxs4" podUID="aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 17:46:48 crc kubenswrapper[4680]: I0217 17:46:48.583286 4680 patch_prober.go:28] interesting pod/downloads-7954f5f757-z9fcm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 17 17:46:48 crc kubenswrapper[4680]: I0217 17:46:48.583429 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-z9fcm" podUID="8e87e04c-2d08-4f13-8b5a-db27f849132e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 17 17:46:48 crc kubenswrapper[4680]: I0217 17:46:48.586018 4680 patch_prober.go:28] interesting pod/downloads-7954f5f757-z9fcm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 17 17:46:48 crc kubenswrapper[4680]: I0217 17:46:48.586083 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z9fcm" podUID="8e87e04c-2d08-4f13-8b5a-db27f849132e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 17 17:46:49 crc kubenswrapper[4680]: I0217 17:46:49.347554 4680 patch_prober.go:28] interesting pod/router-default-5444994796-nkxs4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP 
probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 17:46:49 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Feb 17 17:46:49 crc kubenswrapper[4680]: [+]process-running ok Feb 17 17:46:49 crc kubenswrapper[4680]: healthz check failed Feb 17 17:46:49 crc kubenswrapper[4680]: I0217 17:46:49.347621 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nkxs4" podUID="aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 17:46:49 crc kubenswrapper[4680]: I0217 17:46:49.635959 4680 patch_prober.go:28] interesting pod/console-f9d7485db-t9gdn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Feb 17 17:46:49 crc kubenswrapper[4680]: I0217 17:46:49.636015 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-t9gdn" podUID="ed1c57d6-3991-4ebb-9149-63e38e19dc40" containerName="console" probeResult="failure" output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" Feb 17 17:46:49 crc kubenswrapper[4680]: I0217 17:46:49.764893 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 17:46:49 crc kubenswrapper[4680]: I0217 17:46:49.870732 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/219f0a4e-01b0-41f6-a44d-2db2f66b0ec4-kubelet-dir\") pod \"219f0a4e-01b0-41f6-a44d-2db2f66b0ec4\" (UID: \"219f0a4e-01b0-41f6-a44d-2db2f66b0ec4\") " Feb 17 17:46:49 crc kubenswrapper[4680]: I0217 17:46:49.870822 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/219f0a4e-01b0-41f6-a44d-2db2f66b0ec4-kube-api-access\") pod \"219f0a4e-01b0-41f6-a44d-2db2f66b0ec4\" (UID: \"219f0a4e-01b0-41f6-a44d-2db2f66b0ec4\") " Feb 17 17:46:49 crc kubenswrapper[4680]: I0217 17:46:49.870884 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/219f0a4e-01b0-41f6-a44d-2db2f66b0ec4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "219f0a4e-01b0-41f6-a44d-2db2f66b0ec4" (UID: "219f0a4e-01b0-41f6-a44d-2db2f66b0ec4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:46:49 crc kubenswrapper[4680]: I0217 17:46:49.871200 4680 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/219f0a4e-01b0-41f6-a44d-2db2f66b0ec4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 17:46:49 crc kubenswrapper[4680]: I0217 17:46:49.892987 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/219f0a4e-01b0-41f6-a44d-2db2f66b0ec4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "219f0a4e-01b0-41f6-a44d-2db2f66b0ec4" (UID: "219f0a4e-01b0-41f6-a44d-2db2f66b0ec4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:46:49 crc kubenswrapper[4680]: I0217 17:46:49.972217 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/219f0a4e-01b0-41f6-a44d-2db2f66b0ec4-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 17:46:50 crc kubenswrapper[4680]: I0217 17:46:50.333922 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"219f0a4e-01b0-41f6-a44d-2db2f66b0ec4","Type":"ContainerDied","Data":"fc1e14e7b6cfb25d34d38c1ae468ac6d91eac1a74e8ac35043f151613336a629"} Feb 17 17:46:50 crc kubenswrapper[4680]: I0217 17:46:50.333969 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc1e14e7b6cfb25d34d38c1ae468ac6d91eac1a74e8ac35043f151613336a629" Feb 17 17:46:50 crc kubenswrapper[4680]: I0217 17:46:50.334021 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 17:46:50 crc kubenswrapper[4680]: I0217 17:46:50.354504 4680 patch_prober.go:28] interesting pod/router-default-5444994796-nkxs4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 17:46:50 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Feb 17 17:46:50 crc kubenswrapper[4680]: [+]process-running ok Feb 17 17:46:50 crc kubenswrapper[4680]: healthz check failed Feb 17 17:46:50 crc kubenswrapper[4680]: I0217 17:46:50.354557 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nkxs4" podUID="aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 17:46:51 crc kubenswrapper[4680]: I0217 17:46:51.344011 4680 patch_prober.go:28] interesting pod/router-default-5444994796-nkxs4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 17:46:51 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Feb 17 17:46:51 crc kubenswrapper[4680]: [+]process-running ok Feb 17 17:46:51 crc kubenswrapper[4680]: healthz check failed Feb 17 17:46:51 crc kubenswrapper[4680]: I0217 17:46:51.344115 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nkxs4" podUID="aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 17:46:52 crc kubenswrapper[4680]: I0217 17:46:52.346637 4680 patch_prober.go:28] interesting pod/router-default-5444994796-nkxs4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 17:46:52 crc kubenswrapper[4680]: [-]has-synced failed: reason withheld Feb 17 17:46:52 crc kubenswrapper[4680]: [+]process-running ok Feb 17 17:46:52 crc kubenswrapper[4680]: healthz check failed Feb 17 17:46:52 crc kubenswrapper[4680]: I0217 17:46:52.346864 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nkxs4" podUID="aa6ffa49-187a-4a3d-9ab1-e798d1e0dfab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" 
Feb 17 17:46:53 crc kubenswrapper[4680]: I0217 17:46:53.354582 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-nkxs4" Feb 17 17:46:53 crc kubenswrapper[4680]: I0217 17:46:53.357664 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-nkxs4" Feb 17 17:46:55 crc kubenswrapper[4680]: I0217 17:46:55.621827 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:46:58 crc kubenswrapper[4680]: I0217 17:46:58.214956 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:46:58 crc kubenswrapper[4680]: I0217 17:46:58.215308 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:46:58 crc kubenswrapper[4680]: I0217 17:46:58.215443 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:46:58 crc kubenswrapper[4680]: I0217 17:46:58.215561 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:46:58 crc kubenswrapper[4680]: I0217 17:46:58.218192 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 17 17:46:58 crc kubenswrapper[4680]: I0217 17:46:58.220023 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 17 17:46:58 crc kubenswrapper[4680]: I0217 17:46:58.220055 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 17 17:46:58 crc kubenswrapper[4680]: I0217 17:46:58.227153 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:46:58 crc kubenswrapper[4680]: I0217 17:46:58.227862 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 17 17:46:58 crc kubenswrapper[4680]: I0217 
17:46:58.240245 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:46:58 crc kubenswrapper[4680]: I0217 17:46:58.241257 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:46:58 crc kubenswrapper[4680]: I0217 17:46:58.248059 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:46:58 crc kubenswrapper[4680]: I0217 17:46:58.333950 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 17:46:58 crc kubenswrapper[4680]: I0217 17:46:58.340720 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:46:58 crc kubenswrapper[4680]: I0217 17:46:58.347038 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 17:46:58 crc kubenswrapper[4680]: I0217 17:46:58.595991 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-z9fcm" Feb 17 17:46:59 crc kubenswrapper[4680]: I0217 17:46:59.781127 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-t9gdn" Feb 17 17:46:59 crc kubenswrapper[4680]: I0217 17:46:59.787347 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-t9gdn" Feb 17 17:47:00 crc kubenswrapper[4680]: I0217 17:47:00.700762 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:47:05 crc kubenswrapper[4680]: I0217 17:47:05.588489 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:47:05 crc kubenswrapper[4680]: I0217 17:47:05.588796 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:47:09 crc kubenswrapper[4680]: I0217 17:47:09.740722 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lsp6v" Feb 17 17:47:10 crc kubenswrapper[4680]: I0217 17:47:10.564993 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"04bd70d8b116df903c3d7db62b630192acd21bcbda340c4bf8e4178d1671eee6"} Feb 17 17:47:10 crc kubenswrapper[4680]: I0217 17:47:10.566028 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d526648132f67bc5ea52f3f0f1ab962e77f56c9bcc17eaf48d065531468b45d6"} Feb 17 17:47:12 crc kubenswrapper[4680]: I0217 17:47:12.923881 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1270b10e-8243-4940-b8a1-e0d36a0ef933-metrics-certs\") pod \"network-metrics-daemon-psx9f\" (UID: \"1270b10e-8243-4940-b8a1-e0d36a0ef933\") " pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:47:12 crc kubenswrapper[4680]: I0217 17:47:12.925908 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 17 17:47:12 crc kubenswrapper[4680]: I0217 17:47:12.942341 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1270b10e-8243-4940-b8a1-e0d36a0ef933-metrics-certs\") pod \"network-metrics-daemon-psx9f\" (UID: \"1270b10e-8243-4940-b8a1-e0d36a0ef933\") " pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:47:13 crc kubenswrapper[4680]: I0217 17:47:13.118911 4680 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 17 17:47:13 crc kubenswrapper[4680]: I0217 17:47:13.127959 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-psx9f" Feb 17 17:47:15 crc kubenswrapper[4680]: I0217 17:47:15.599449 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qxkds"] Feb 17 17:47:15 crc kubenswrapper[4680]: W0217 17:47:15.867046 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-682f2bbad17a61926793290cfe37b7512f74b710f5e9947872ecb1b216f7854f WatchSource:0}: Error finding container 682f2bbad17a61926793290cfe37b7512f74b710f5e9947872ecb1b216f7854f: Status 404 returned error can't find the container with id 682f2bbad17a61926793290cfe37b7512f74b710f5e9947872ecb1b216f7854f Feb 17 17:47:16 crc kubenswrapper[4680]: I0217 17:47:16.620205 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"682f2bbad17a61926793290cfe37b7512f74b710f5e9947872ecb1b216f7854f"} Feb 17 17:47:17 crc kubenswrapper[4680]: I0217 17:47:17.977103 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 17 17:47:17 crc kubenswrapper[4680]: E0217 17:47:17.979574 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="219f0a4e-01b0-41f6-a44d-2db2f66b0ec4" containerName="pruner" Feb 17 17:47:17 crc kubenswrapper[4680]: I0217 17:47:17.979587 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="219f0a4e-01b0-41f6-a44d-2db2f66b0ec4" containerName="pruner" Feb 17 17:47:17 crc kubenswrapper[4680]: E0217 17:47:17.979597 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a8d880a-36c0-43cc-a504-ce61b6ea17ca" containerName="pruner" Feb 17 17:47:17 crc kubenswrapper[4680]: I0217 17:47:17.979602 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a8d880a-36c0-43cc-a504-ce61b6ea17ca" containerName="pruner" Feb 17 17:47:17 crc kubenswrapper[4680]: I0217 17:47:17.979708 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a8d880a-36c0-43cc-a504-ce61b6ea17ca" containerName="pruner" Feb 17 17:47:17 crc kubenswrapper[4680]: I0217 17:47:17.979725 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="219f0a4e-01b0-41f6-a44d-2db2f66b0ec4" containerName="pruner" Feb 17 17:47:17 crc kubenswrapper[4680]: I0217 17:47:17.980105 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 17:47:17 crc kubenswrapper[4680]: I0217 17:47:17.983666 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 17 17:47:17 crc kubenswrapper[4680]: I0217 17:47:17.983863 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 17 17:47:17 crc kubenswrapper[4680]: I0217 17:47:17.986607 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 17 17:47:18 crc kubenswrapper[4680]: I0217 17:47:18.096123 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a05b11fb-4342-4ed8-afcd-beee4fdc64c3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a05b11fb-4342-4ed8-afcd-beee4fdc64c3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 17:47:18 crc kubenswrapper[4680]: I0217 17:47:18.096226 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a05b11fb-4342-4ed8-afcd-beee4fdc64c3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a05b11fb-4342-4ed8-afcd-beee4fdc64c3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 17:47:18 crc kubenswrapper[4680]: I0217 17:47:18.197219 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a05b11fb-4342-4ed8-afcd-beee4fdc64c3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a05b11fb-4342-4ed8-afcd-beee4fdc64c3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 17:47:18 crc kubenswrapper[4680]: I0217 17:47:18.197356 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a05b11fb-4342-4ed8-afcd-beee4fdc64c3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a05b11fb-4342-4ed8-afcd-beee4fdc64c3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 17:47:18 crc kubenswrapper[4680]: I0217 17:47:18.197434 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a05b11fb-4342-4ed8-afcd-beee4fdc64c3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a05b11fb-4342-4ed8-afcd-beee4fdc64c3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 17:47:18 crc kubenswrapper[4680]: I0217 17:47:18.215728 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a05b11fb-4342-4ed8-afcd-beee4fdc64c3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a05b11fb-4342-4ed8-afcd-beee4fdc64c3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 17:47:18 crc kubenswrapper[4680]: I0217 17:47:18.306749 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 17:47:18 crc kubenswrapper[4680]: E0217 17:47:18.953852 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 17 17:47:18 crc kubenswrapper[4680]: E0217 17:47:18.954023 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sfmq6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-6hgx6_openshift-marketplace(6ea8ef6d-a9c6-477e-89be-fa625c176d42): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 17:47:18 crc kubenswrapper[4680]: E0217 17:47:18.955227 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-6hgx6" podUID="6ea8ef6d-a9c6-477e-89be-fa625c176d42" Feb 17 17:47:20 crc kubenswrapper[4680]: E0217 17:47:20.327558 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-6hgx6" podUID="6ea8ef6d-a9c6-477e-89be-fa625c176d42" Feb 17 17:47:20 crc kubenswrapper[4680]: E0217 17:47:20.398882 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 17 17:47:20 crc kubenswrapper[4680]: E0217 17:47:20.399213 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6w446,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-czdr4_openshift-marketplace(03756e56-4c50-4e20-91b1-16fdea59c4a0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 17:47:20 crc kubenswrapper[4680]: E0217 17:47:20.401953 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-czdr4" podUID="03756e56-4c50-4e20-91b1-16fdea59c4a0" Feb 17 17:47:20 crc kubenswrapper[4680]: E0217 17:47:20.424816 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 17 17:47:20 crc kubenswrapper[4680]: E0217 17:47:20.424997 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n8tqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-d4hnz_openshift-marketplace(f7d3d64c-68b5-430e-9370-32d50d6efaa0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 17:47:20 crc kubenswrapper[4680]: E0217 17:47:20.426240 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-d4hnz" podUID="f7d3d64c-68b5-430e-9370-32d50d6efaa0" Feb 17 17:47:20 crc kubenswrapper[4680]: E0217 17:47:20.580574 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 17 17:47:20 crc kubenswrapper[4680]: E0217 17:47:20.580964 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ntrq5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-76dxl_openshift-marketplace(a586c719-e0d8-4941-ae16-3034a8f66616): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 17:47:20 crc kubenswrapper[4680]: E0217 17:47:20.582842 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-76dxl" podUID="a586c719-e0d8-4941-ae16-3034a8f66616" Feb 17 17:47:20 crc kubenswrapper[4680]: E0217 17:47:20.647499 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 17 17:47:20 crc kubenswrapper[4680]: E0217 17:47:20.647627 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-szfwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-4spzf_openshift-marketplace(14614809-63c8-4553-aa29-c129132071b7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 17:47:20 crc kubenswrapper[4680]: E0217 17:47:20.647865 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-czdr4" podUID="03756e56-4c50-4e20-91b1-16fdea59c4a0" Feb 17 17:47:20 crc kubenswrapper[4680]: E0217 17:47:20.649492 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-4spzf" podUID="14614809-63c8-4553-aa29-c129132071b7" Feb 17 17:47:20 crc kubenswrapper[4680]: E0217 17:47:20.653221 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-76dxl" podUID="a586c719-e0d8-4941-ae16-3034a8f66616" Feb 17 17:47:20 crc kubenswrapper[4680]: E0217 17:47:20.657402 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-d4hnz" podUID="f7d3d64c-68b5-430e-9370-32d50d6efaa0" Feb 17 17:47:20 crc kubenswrapper[4680]: E0217 17:47:20.659415 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 17 17:47:20 crc kubenswrapper[4680]: E0217 17:47:20.659528 4680 kuberuntime_manager.go:1274] "Unhandled 
Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-296x8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-kkz99_openshift-marketplace(de1f714a-f726-49f5-b611-3b0690ed7245): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 17:47:20 crc kubenswrapper[4680]: E0217 17:47:20.662242 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-kkz99" podUID="de1f714a-f726-49f5-b611-3b0690ed7245" Feb 17 17:47:20 crc kubenswrapper[4680]: I0217 17:47:20.855669 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 17 17:47:20 crc kubenswrapper[4680]: W0217 17:47:20.859691 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda05b11fb_4342_4ed8_afcd_beee4fdc64c3.slice/crio-cf1869c4959c2497fc00180e3622775da9078eaa87c1e8f73f594b82d5714d2e WatchSource:0}: Error finding container cf1869c4959c2497fc00180e3622775da9078eaa87c1e8f73f594b82d5714d2e: Status 404 returned error can't find the container with id cf1869c4959c2497fc00180e3622775da9078eaa87c1e8f73f594b82d5714d2e Feb 17 17:47:21 crc kubenswrapper[4680]: I0217 17:47:21.027302 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-psx9f"] Feb 17 17:47:21 crc kubenswrapper[4680]: W0217 17:47:21.041791 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1270b10e_8243_4940_b8a1_e0d36a0ef933.slice/crio-7f2343c67e04f1c405baefbc07ab4d6f0c31836db6506fc959aef2cef934aa45 WatchSource:0}: Error finding container 7f2343c67e04f1c405baefbc07ab4d6f0c31836db6506fc959aef2cef934aa45: Status 404 returned error can't find the container with id 
7f2343c67e04f1c405baefbc07ab4d6f0c31836db6506fc959aef2cef934aa45 Feb 17 17:47:21 crc kubenswrapper[4680]: I0217 17:47:21.650714 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-psx9f" event={"ID":"1270b10e-8243-4940-b8a1-e0d36a0ef933","Type":"ContainerStarted","Data":"0671d1475c87a7604497bad91cd18e21d355d8bc2d98d0cbb42b38184da6117c"} Feb 17 17:47:21 crc kubenswrapper[4680]: I0217 17:47:21.650781 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-psx9f" event={"ID":"1270b10e-8243-4940-b8a1-e0d36a0ef933","Type":"ContainerStarted","Data":"7f2343c67e04f1c405baefbc07ab4d6f0c31836db6506fc959aef2cef934aa45"} Feb 17 17:47:21 crc kubenswrapper[4680]: I0217 17:47:21.651853 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"164f5017e270675be1d4ed7a18bef7fdba698520d85355b9b28ded3583e6846c"} Feb 17 17:47:21 crc kubenswrapper[4680]: I0217 17:47:21.654599 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"03fcfaddc89641b6fc65933e969f7a99711b33f07ee2525f3b7f53f7be597287"} Feb 17 17:47:21 crc kubenswrapper[4680]: I0217 17:47:21.655209 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:47:21 crc kubenswrapper[4680]: I0217 17:47:21.657017 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a05b11fb-4342-4ed8-afcd-beee4fdc64c3","Type":"ContainerStarted","Data":"81d946b950505002780f778bdb7f49da6f4e65b003fe9d6e4bee7cc81bcd0d98"} Feb 17 17:47:21 crc kubenswrapper[4680]: I0217 17:47:21.657051 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a05b11fb-4342-4ed8-afcd-beee4fdc64c3","Type":"ContainerStarted","Data":"cf1869c4959c2497fc00180e3622775da9078eaa87c1e8f73f594b82d5714d2e"} Feb 17 17:47:21 crc kubenswrapper[4680]: I0217 17:47:21.658243 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f9337d4d6d80fabd1adfc9ed1399f8b6d17a993edc41be4c816bf86be7feb319"} Feb 17 17:47:21 crc kubenswrapper[4680]: I0217 17:47:21.661463 4680 generic.go:334] "Generic (PLEG): container finished" podID="a08188de-38b5-4e41-b12d-cf5a1cf488ec" containerID="329e12520898e331c29c9e44b308f7992ccbd1f80dd8b2b25936502ab70b67c9" exitCode=0 Feb 17 17:47:21 crc kubenswrapper[4680]: I0217 17:47:21.661518 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdnhz" event={"ID":"a08188de-38b5-4e41-b12d-cf5a1cf488ec","Type":"ContainerDied","Data":"329e12520898e331c29c9e44b308f7992ccbd1f80dd8b2b25936502ab70b67c9"} Feb 17 17:47:21 crc kubenswrapper[4680]: I0217 17:47:21.664175 4680 generic.go:334] "Generic (PLEG): container finished" podID="736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2" containerID="fc402ed79bce9a7192bcdb7048d26aec458a57d6297aecaa252ee684f7cef196" exitCode=0 Feb 17 17:47:21 crc kubenswrapper[4680]: I0217 17:47:21.664217 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-pdls7" event={"ID":"736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2","Type":"ContainerDied","Data":"fc402ed79bce9a7192bcdb7048d26aec458a57d6297aecaa252ee684f7cef196"} Feb 17 17:47:21 crc kubenswrapper[4680]: E0217 17:47:21.666093 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-kkz99" podUID="de1f714a-f726-49f5-b611-3b0690ed7245" Feb 17 17:47:21 crc kubenswrapper[4680]: E0217 17:47:21.672958 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-4spzf" podUID="14614809-63c8-4553-aa29-c129132071b7" Feb 17 17:47:22 crc kubenswrapper[4680]: I0217 17:47:22.674607 4680 generic.go:334] "Generic (PLEG): container finished" podID="a05b11fb-4342-4ed8-afcd-beee4fdc64c3" containerID="81d946b950505002780f778bdb7f49da6f4e65b003fe9d6e4bee7cc81bcd0d98" exitCode=0 Feb 17 17:47:22 crc kubenswrapper[4680]: I0217 17:47:22.674703 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a05b11fb-4342-4ed8-afcd-beee4fdc64c3","Type":"ContainerDied","Data":"81d946b950505002780f778bdb7f49da6f4e65b003fe9d6e4bee7cc81bcd0d98"} Feb 17 17:47:22 crc kubenswrapper[4680]: I0217 17:47:22.677530 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-psx9f" event={"ID":"1270b10e-8243-4940-b8a1-e0d36a0ef933","Type":"ContainerStarted","Data":"c7b00af5f5bc2c878fc46a0b1fbc3406395b6cd03465465cc412ff603d3cba25"} Feb 17 17:47:22 crc kubenswrapper[4680]: I0217 17:47:22.736790 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-psx9f" podStartSLOduration=151.736772866 podStartE2EDuration="2m31.736772866s" podCreationTimestamp="2026-02-17 17:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:47:22.735139785 +0000 UTC m=+172.286038464" watchObservedRunningTime="2026-02-17 17:47:22.736772866 +0000 UTC m=+172.287671555" Feb 17 17:47:22 crc kubenswrapper[4680]: I0217 17:47:22.975793 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 17 17:47:22 crc kubenswrapper[4680]: I0217 17:47:22.976666 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 17:47:22 crc kubenswrapper[4680]: I0217 17:47:22.987227 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 17 17:47:23 crc kubenswrapper[4680]: I0217 17:47:23.059021 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/95c5ebe9-31a5-487f-a8a9-923d79843be0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"95c5ebe9-31a5-487f-a8a9-923d79843be0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 17:47:23 crc kubenswrapper[4680]: I0217 17:47:23.059064 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/95c5ebe9-31a5-487f-a8a9-923d79843be0-kube-api-access\") pod \"installer-9-crc\" (UID: \"95c5ebe9-31a5-487f-a8a9-923d79843be0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 17:47:23 crc kubenswrapper[4680]: I0217 17:47:23.059090 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/95c5ebe9-31a5-487f-a8a9-923d79843be0-var-lock\") pod \"installer-9-crc\" (UID: \"95c5ebe9-31a5-487f-a8a9-923d79843be0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 17:47:23 crc kubenswrapper[4680]: I0217 17:47:23.161138 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/95c5ebe9-31a5-487f-a8a9-923d79843be0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"95c5ebe9-31a5-487f-a8a9-923d79843be0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 17:47:23 crc kubenswrapper[4680]: I0217 17:47:23.161176 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/95c5ebe9-31a5-487f-a8a9-923d79843be0-kube-api-access\") pod \"installer-9-crc\" (UID: \"95c5ebe9-31a5-487f-a8a9-923d79843be0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 17:47:23 crc kubenswrapper[4680]: I0217 17:47:23.161204 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/95c5ebe9-31a5-487f-a8a9-923d79843be0-var-lock\") pod \"installer-9-crc\" (UID: \"95c5ebe9-31a5-487f-a8a9-923d79843be0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 17:47:23 crc kubenswrapper[4680]: I0217 17:47:23.161300 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/95c5ebe9-31a5-487f-a8a9-923d79843be0-var-lock\") pod \"installer-9-crc\" (UID: \"95c5ebe9-31a5-487f-a8a9-923d79843be0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 17:47:23 crc kubenswrapper[4680]: I0217 17:47:23.161363 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/95c5ebe9-31a5-487f-a8a9-923d79843be0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"95c5ebe9-31a5-487f-a8a9-923d79843be0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 17:47:23 crc kubenswrapper[4680]: I0217 17:47:23.187302 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/95c5ebe9-31a5-487f-a8a9-923d79843be0-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"95c5ebe9-31a5-487f-a8a9-923d79843be0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 17:47:23 crc kubenswrapper[4680]: I0217 17:47:23.300566 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 17:47:23 crc kubenswrapper[4680]: I0217 17:47:23.732042 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 17 17:47:24 crc kubenswrapper[4680]: I0217 17:47:24.014490 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 17:47:24 crc kubenswrapper[4680]: I0217 17:47:24.071893 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a05b11fb-4342-4ed8-afcd-beee4fdc64c3-kube-api-access\") pod \"a05b11fb-4342-4ed8-afcd-beee4fdc64c3\" (UID: \"a05b11fb-4342-4ed8-afcd-beee4fdc64c3\") " Feb 17 17:47:24 crc kubenswrapper[4680]: I0217 17:47:24.073288 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a05b11fb-4342-4ed8-afcd-beee4fdc64c3-kubelet-dir\") pod \"a05b11fb-4342-4ed8-afcd-beee4fdc64c3\" (UID: \"a05b11fb-4342-4ed8-afcd-beee4fdc64c3\") " Feb 17 17:47:24 crc kubenswrapper[4680]: I0217 17:47:24.073611 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a05b11fb-4342-4ed8-afcd-beee4fdc64c3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a05b11fb-4342-4ed8-afcd-beee4fdc64c3" (UID: "a05b11fb-4342-4ed8-afcd-beee4fdc64c3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:47:24 crc kubenswrapper[4680]: I0217 17:47:24.073811 4680 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a05b11fb-4342-4ed8-afcd-beee4fdc64c3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 17:47:24 crc kubenswrapper[4680]: I0217 17:47:24.077957 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a05b11fb-4342-4ed8-afcd-beee4fdc64c3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a05b11fb-4342-4ed8-afcd-beee4fdc64c3" (UID: "a05b11fb-4342-4ed8-afcd-beee4fdc64c3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:47:24 crc kubenswrapper[4680]: I0217 17:47:24.174722 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a05b11fb-4342-4ed8-afcd-beee4fdc64c3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 17:47:24 crc kubenswrapper[4680]: I0217 17:47:24.687381 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"95c5ebe9-31a5-487f-a8a9-923d79843be0","Type":"ContainerStarted","Data":"3475cae537a9c8781391bb5a87e2d60d9721ee10855b50b8a01f27a14970fafe"} Feb 17 17:47:24 crc kubenswrapper[4680]: I0217 17:47:24.688047 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"95c5ebe9-31a5-487f-a8a9-923d79843be0","Type":"ContainerStarted","Data":"d796506c029662e6007964aac2067f88a93f892bb4cd670c96eb6084f56b7c31"} Feb 17 17:47:24 crc kubenswrapper[4680]: I0217 17:47:24.689058 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a05b11fb-4342-4ed8-afcd-beee4fdc64c3","Type":"ContainerDied","Data":"cf1869c4959c2497fc00180e3622775da9078eaa87c1e8f73f594b82d5714d2e"} Feb 17 17:47:24 crc kubenswrapper[4680]: I0217 17:47:24.689095 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf1869c4959c2497fc00180e3622775da9078eaa87c1e8f73f594b82d5714d2e" Feb 17 17:47:24 crc kubenswrapper[4680]: I0217 17:47:24.689274 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 17:47:24 crc kubenswrapper[4680]: I0217 17:47:24.708558 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.708540647 podStartE2EDuration="2.708540647s" podCreationTimestamp="2026-02-17 17:47:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:47:24.704888592 +0000 UTC m=+174.255787271" watchObservedRunningTime="2026-02-17 17:47:24.708540647 +0000 UTC m=+174.259439326" Feb 17 17:47:25 crc kubenswrapper[4680]: I0217 17:47:25.697510 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdnhz" event={"ID":"a08188de-38b5-4e41-b12d-cf5a1cf488ec","Type":"ContainerStarted","Data":"52b5f59888665260200b06dfd5b0a8020169e76df420785a5dc5a0dc18bc0d62"} Feb 17 17:47:25 crc kubenswrapper[4680]: I0217 17:47:25.716948 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qdnhz" podStartSLOduration=3.945565804 podStartE2EDuration="47.716930604s" podCreationTimestamp="2026-02-17 17:46:38 +0000 UTC" firstStartedPulling="2026-02-17 17:46:41.120054697 +0000 UTC m=+130.670953366" lastFinishedPulling="2026-02-17 17:47:24.891419477 +0000 UTC m=+174.442318166" observedRunningTime="2026-02-17 17:47:25.713173777 +0000 UTC m=+175.264072456" watchObservedRunningTime="2026-02-17 17:47:25.716930604 +0000 UTC m=+175.267829283" Feb 17 17:47:26 crc kubenswrapper[4680]: I0217 17:47:26.704769 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pdls7" event={"ID":"736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2","Type":"ContainerStarted","Data":"4f639cdd2095a72a6ca92aed5a93b145ffa085c045290fda00ea10e25a4ac225"} Feb 17 
17:47:26 crc kubenswrapper[4680]: I0217 17:47:26.724211 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pdls7" podStartSLOduration=3.893487369 podStartE2EDuration="45.724194288s" podCreationTimestamp="2026-02-17 17:46:41 +0000 UTC" firstStartedPulling="2026-02-17 17:46:43.469656437 +0000 UTC m=+133.020555116" lastFinishedPulling="2026-02-17 17:47:25.300363356 +0000 UTC m=+174.851262035" observedRunningTime="2026-02-17 17:47:26.722787623 +0000 UTC m=+176.273686302" watchObservedRunningTime="2026-02-17 17:47:26.724194288 +0000 UTC m=+176.275092967" Feb 17 17:47:29 crc kubenswrapper[4680]: I0217 17:47:29.312003 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qdnhz" Feb 17 17:47:29 crc kubenswrapper[4680]: I0217 17:47:29.312579 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qdnhz" Feb 17 17:47:29 crc kubenswrapper[4680]: I0217 17:47:29.506679 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qdnhz" Feb 17 17:47:30 crc kubenswrapper[4680]: I0217 17:47:30.785931 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qdnhz" Feb 17 17:47:31 crc kubenswrapper[4680]: I0217 17:47:31.645497 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pdls7" Feb 17 17:47:31 crc kubenswrapper[4680]: I0217 17:47:31.645601 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pdls7" Feb 17 17:47:32 crc kubenswrapper[4680]: I0217 17:47:32.689964 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pdls7" podUID="736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2" containerName="registry-server" probeResult="failure" output=< Feb 17 17:47:32 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Feb 17 17:47:32 crc kubenswrapper[4680]: > Feb 17 17:47:33 crc kubenswrapper[4680]: I0217 17:47:33.751070 4680 generic.go:334] "Generic (PLEG): container finished" podID="03756e56-4c50-4e20-91b1-16fdea59c4a0" containerID="e1a558590d632968b5ff6c913eb7cfacf892c6cfeb44f4d02696c293c106c6ea" exitCode=0 Feb 17 17:47:33 crc kubenswrapper[4680]: I0217 17:47:33.751155 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-czdr4" event={"ID":"03756e56-4c50-4e20-91b1-16fdea59c4a0","Type":"ContainerDied","Data":"e1a558590d632968b5ff6c913eb7cfacf892c6cfeb44f4d02696c293c106c6ea"} Feb 17 17:47:34 crc kubenswrapper[4680]: I0217 17:47:34.758357 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-czdr4" event={"ID":"03756e56-4c50-4e20-91b1-16fdea59c4a0","Type":"ContainerStarted","Data":"635f3aa57cb48054c99f8a6c2afa4e46ac1c88c8fd493cb98c8f04e6ae018816"} Feb 17 17:47:34 crc kubenswrapper[4680]: I0217 17:47:34.781489 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-czdr4" podStartSLOduration=2.9612805939999998 podStartE2EDuration="54.781473202s" podCreationTimestamp="2026-02-17 17:46:40 +0000 UTC" firstStartedPulling="2026-02-17 17:46:42.327231116 +0000 UTC m=+131.878129795" lastFinishedPulling="2026-02-17 17:47:34.147423724 +0000 UTC m=+183.698322403" 
observedRunningTime="2026-02-17 17:47:34.778277201 +0000 UTC m=+184.329175890" watchObservedRunningTime="2026-02-17 17:47:34.781473202 +0000 UTC m=+184.332371881" Feb 17 17:47:35 crc kubenswrapper[4680]: I0217 17:47:35.588300 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:47:35 crc kubenswrapper[4680]: I0217 17:47:35.588396 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:47:35 crc kubenswrapper[4680]: I0217 17:47:35.767265 4680 generic.go:334] "Generic (PLEG): container finished" podID="f7d3d64c-68b5-430e-9370-32d50d6efaa0" containerID="c3e30a3469821ec5b3e1f5dfb90244227bb74b1319f67275b5eb46cd843e90a0" exitCode=0 Feb 17 17:47:35 crc kubenswrapper[4680]: I0217 17:47:35.767359 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d4hnz" event={"ID":"f7d3d64c-68b5-430e-9370-32d50d6efaa0","Type":"ContainerDied","Data":"c3e30a3469821ec5b3e1f5dfb90244227bb74b1319f67275b5eb46cd843e90a0"} Feb 17 17:47:35 crc kubenswrapper[4680]: I0217 17:47:35.771723 4680 generic.go:334] "Generic (PLEG): container finished" podID="a586c719-e0d8-4941-ae16-3034a8f66616" containerID="2555403ecc8f5f1d57765d96da42001c97626d816db84115c8b5e769fd12edd9" exitCode=0 Feb 17 17:47:35 crc kubenswrapper[4680]: I0217 17:47:35.771778 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76dxl" event={"ID":"a586c719-e0d8-4941-ae16-3034a8f66616","Type":"ContainerDied","Data":"2555403ecc8f5f1d57765d96da42001c97626d816db84115c8b5e769fd12edd9"} Feb 17 17:47:36 crc kubenswrapper[4680]: I0217 17:47:36.786195 4680 generic.go:334] "Generic (PLEG): container finished" podID="14614809-63c8-4553-aa29-c129132071b7" containerID="402d4ff776b7e9e39d814608388f87b45bede162fa7fb404720df47d064d025c" exitCode=0 Feb 17 17:47:36 crc kubenswrapper[4680]: I0217 17:47:36.786299 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4spzf" event={"ID":"14614809-63c8-4553-aa29-c129132071b7","Type":"ContainerDied","Data":"402d4ff776b7e9e39d814608388f87b45bede162fa7fb404720df47d064d025c"} Feb 17 17:47:36 crc kubenswrapper[4680]: I0217 17:47:36.788972 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76dxl" event={"ID":"a586c719-e0d8-4941-ae16-3034a8f66616","Type":"ContainerStarted","Data":"b106f59a0723f8f62232fb78b8daeadc72bd8947c9f1352ee8ba14994b57d874"} Feb 17 17:47:36 crc kubenswrapper[4680]: I0217 17:47:36.792669 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d4hnz" event={"ID":"f7d3d64c-68b5-430e-9370-32d50d6efaa0","Type":"ContainerStarted","Data":"389974a7b07fe59c1015bcd151c42abbec8c3c7fc7a20aa98254ffc2ae960674"} Feb 17 17:47:36 crc kubenswrapper[4680]: I0217 17:47:36.794193 4680 generic.go:334] "Generic (PLEG): container finished" podID="6ea8ef6d-a9c6-477e-89be-fa625c176d42" containerID="c5c403d76b186fc306294c351f57dc4ce333617204e44953e5af23ccf697f098" exitCode=0 Feb 17 17:47:36 
crc kubenswrapper[4680]: I0217 17:47:36.794218 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hgx6" event={"ID":"6ea8ef6d-a9c6-477e-89be-fa625c176d42","Type":"ContainerDied","Data":"c5c403d76b186fc306294c351f57dc4ce333617204e44953e5af23ccf697f098"} Feb 17 17:47:36 crc kubenswrapper[4680]: I0217 17:47:36.846649 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d4hnz" podStartSLOduration=3.756524549 podStartE2EDuration="58.846628079s" podCreationTimestamp="2026-02-17 17:46:38 +0000 UTC" firstStartedPulling="2026-02-17 17:46:41.098365083 +0000 UTC m=+130.649263762" lastFinishedPulling="2026-02-17 17:47:36.188468613 +0000 UTC m=+185.739367292" observedRunningTime="2026-02-17 17:47:36.830693118 +0000 UTC m=+186.381591817" watchObservedRunningTime="2026-02-17 17:47:36.846628079 +0000 UTC m=+186.397526758" Feb 17 17:47:36 crc kubenswrapper[4680]: I0217 17:47:36.869127 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-76dxl" podStartSLOduration=3.051300186 podStartE2EDuration="55.869107445s" podCreationTimestamp="2026-02-17 17:46:41 +0000 UTC" firstStartedPulling="2026-02-17 17:46:43.437368172 +0000 UTC m=+132.988266851" lastFinishedPulling="2026-02-17 17:47:36.255175441 +0000 UTC m=+185.806074110" observedRunningTime="2026-02-17 17:47:36.867848575 +0000 UTC m=+186.418747244" watchObservedRunningTime="2026-02-17 17:47:36.869107445 +0000 UTC m=+186.420006124" Feb 17 17:47:37 crc kubenswrapper[4680]: I0217 17:47:37.809581 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkz99" event={"ID":"de1f714a-f726-49f5-b611-3b0690ed7245","Type":"ContainerStarted","Data":"68bf59842990d42a93f437aa9dfddbc1e8b7364d15250b15c76fdb2bdfe0461b"} Feb 17 17:47:37 crc kubenswrapper[4680]: I0217 17:47:37.815273 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hgx6" event={"ID":"6ea8ef6d-a9c6-477e-89be-fa625c176d42","Type":"ContainerStarted","Data":"54c38411af0a2d57b4d25f55728f6a49700d1bad0f59388cbee84440b77634d2"} Feb 17 17:47:37 crc kubenswrapper[4680]: I0217 17:47:37.817589 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4spzf" event={"ID":"14614809-63c8-4553-aa29-c129132071b7","Type":"ContainerStarted","Data":"01d41f551b2050b1a0aab9ae54365340548516b69b871f69821b1862c601b22e"} Feb 17 17:47:37 crc kubenswrapper[4680]: I0217 17:47:37.855355 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4spzf" podStartSLOduration=4.845910965 podStartE2EDuration="59.855335747s" podCreationTimestamp="2026-02-17 17:46:38 +0000 UTC" firstStartedPulling="2026-02-17 17:46:42.17474294 +0000 UTC m=+131.725641629" lastFinishedPulling="2026-02-17 17:47:37.184167732 +0000 UTC m=+186.735066411" observedRunningTime="2026-02-17 17:47:37.853487048 +0000 UTC m=+187.404385727" watchObservedRunningTime="2026-02-17 17:47:37.855335747 +0000 UTC m=+187.406234426" Feb 17 17:47:37 crc kubenswrapper[4680]: I0217 17:47:37.872810 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6hgx6" podStartSLOduration=2.770746536 podStartE2EDuration="59.872787535s" podCreationTimestamp="2026-02-17 17:46:38 +0000 UTC" firstStartedPulling="2026-02-17 17:46:40.140521309 +0000 UTC 
m=+129.691419988" lastFinishedPulling="2026-02-17 17:47:37.242562288 +0000 UTC m=+186.793460987" observedRunningTime="2026-02-17 17:47:37.870112311 +0000 UTC m=+187.421011000" watchObservedRunningTime="2026-02-17 17:47:37.872787535 +0000 UTC m=+187.423686214" Feb 17 17:47:38 crc kubenswrapper[4680]: I0217 17:47:38.685150 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6hgx6" Feb 17 17:47:38 crc kubenswrapper[4680]: I0217 17:47:38.685678 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6hgx6" Feb 17 17:47:38 crc kubenswrapper[4680]: I0217 17:47:38.824308 4680 generic.go:334] "Generic (PLEG): container finished" podID="de1f714a-f726-49f5-b611-3b0690ed7245" containerID="68bf59842990d42a93f437aa9dfddbc1e8b7364d15250b15c76fdb2bdfe0461b" exitCode=0 Feb 17 17:47:38 crc kubenswrapper[4680]: I0217 17:47:38.825275 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkz99" event={"ID":"de1f714a-f726-49f5-b611-3b0690ed7245","Type":"ContainerDied","Data":"68bf59842990d42a93f437aa9dfddbc1e8b7364d15250b15c76fdb2bdfe0461b"} Feb 17 17:47:39 crc kubenswrapper[4680]: I0217 17:47:39.075831 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d4hnz" Feb 17 17:47:39 crc kubenswrapper[4680]: I0217 17:47:39.075898 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d4hnz" Feb 17 17:47:39 crc kubenswrapper[4680]: I0217 17:47:39.112885 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d4hnz" Feb 17 17:47:39 crc kubenswrapper[4680]: I0217 17:47:39.309407 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4spzf" Feb 17 17:47:39 crc kubenswrapper[4680]: I0217 17:47:39.309453 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4spzf" Feb 17 17:47:39 crc kubenswrapper[4680]: I0217 17:47:39.368605 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4spzf" Feb 17 17:47:39 crc kubenswrapper[4680]: I0217 17:47:39.731678 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-6hgx6" podUID="6ea8ef6d-a9c6-477e-89be-fa625c176d42" containerName="registry-server" probeResult="failure" output=< Feb 17 17:47:39 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Feb 17 17:47:39 crc kubenswrapper[4680]: > Feb 17 17:47:39 crc kubenswrapper[4680]: I0217 17:47:39.831231 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkz99" event={"ID":"de1f714a-f726-49f5-b611-3b0690ed7245","Type":"ContainerStarted","Data":"b6b777c84136f51c7b5d52ffffe2fb82e6ba8e3edde05c570308a0ceb0ad470c"} Feb 17 17:47:39 crc kubenswrapper[4680]: I0217 17:47:39.852232 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kkz99" podStartSLOduration=2.982713077 podStartE2EDuration="59.852203996s" podCreationTimestamp="2026-02-17 17:46:40 +0000 UTC" firstStartedPulling="2026-02-17 17:46:42.356051492 +0000 UTC m=+131.906950171" lastFinishedPulling="2026-02-17 17:47:39.225542401 +0000 UTC 
m=+188.776441090" observedRunningTime="2026-02-17 17:47:39.84977312 +0000 UTC m=+189.400671799" watchObservedRunningTime="2026-02-17 17:47:39.852203996 +0000 UTC m=+189.403102675" Feb 17 17:47:40 crc kubenswrapper[4680]: I0217 17:47:40.540947 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kkz99" Feb 17 17:47:40 crc kubenswrapper[4680]: I0217 17:47:40.541262 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kkz99" Feb 17 17:47:40 crc kubenswrapper[4680]: I0217 17:47:40.631738 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" podUID="b1c48715-1326-41fd-8b2e-007578010c80" containerName="oauth-openshift" containerID="cri-o://c1ba09e1cfac89999cbbf7d1fe8539bf06d3bcbaa3e369ff8bd5332863f35f43" gracePeriod=15 Feb 17 17:47:40 crc kubenswrapper[4680]: I0217 17:47:40.819483 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-czdr4" Feb 17 17:47:40 crc kubenswrapper[4680]: I0217 17:47:40.819536 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-czdr4" Feb 17 17:47:40 crc kubenswrapper[4680]: I0217 17:47:40.869476 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-czdr4" Feb 17 17:47:40 crc kubenswrapper[4680]: I0217 17:47:40.915780 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-czdr4" Feb 17 17:47:41 crc kubenswrapper[4680]: I0217 17:47:41.383104 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-czdr4"] Feb 17 17:47:41 crc kubenswrapper[4680]: I0217 17:47:41.584239 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-kkz99" podUID="de1f714a-f726-49f5-b611-3b0690ed7245" containerName="registry-server" probeResult="failure" output=< Feb 17 17:47:41 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Feb 17 17:47:41 crc kubenswrapper[4680]: > Feb 17 17:47:41 crc kubenswrapper[4680]: I0217 17:47:41.683777 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pdls7" Feb 17 17:47:41 crc kubenswrapper[4680]: I0217 17:47:41.717590 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pdls7" Feb 17 17:47:41 crc kubenswrapper[4680]: I0217 17:47:41.856819 4680 generic.go:334] "Generic (PLEG): container finished" podID="b1c48715-1326-41fd-8b2e-007578010c80" containerID="c1ba09e1cfac89999cbbf7d1fe8539bf06d3bcbaa3e369ff8bd5332863f35f43" exitCode=0 Feb 17 17:47:41 crc kubenswrapper[4680]: I0217 17:47:41.857477 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" event={"ID":"b1c48715-1326-41fd-8b2e-007578010c80","Type":"ContainerDied","Data":"c1ba09e1cfac89999cbbf7d1fe8539bf06d3bcbaa3e369ff8bd5332863f35f43"} Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.043135 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-76dxl" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.043538 4680 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-76dxl" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.068252 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.095304 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6644f974c8-zdsbm"] Feb 17 17:47:42 crc kubenswrapper[4680]: E0217 17:47:42.095598 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1c48715-1326-41fd-8b2e-007578010c80" containerName="oauth-openshift" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.095631 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1c48715-1326-41fd-8b2e-007578010c80" containerName="oauth-openshift" Feb 17 17:47:42 crc kubenswrapper[4680]: E0217 17:47:42.095655 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a05b11fb-4342-4ed8-afcd-beee4fdc64c3" containerName="pruner" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.095663 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="a05b11fb-4342-4ed8-afcd-beee4fdc64c3" containerName="pruner" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.095787 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1c48715-1326-41fd-8b2e-007578010c80" containerName="oauth-openshift" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.095805 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="a05b11fb-4342-4ed8-afcd-beee4fdc64c3" containerName="pruner" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.096257 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.109530 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-ocp-branding-template\") pod \"b1c48715-1326-41fd-8b2e-007578010c80\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.109587 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hmtx\" (UniqueName: \"kubernetes.io/projected/b1c48715-1326-41fd-8b2e-007578010c80-kube-api-access-8hmtx\") pod \"b1c48715-1326-41fd-8b2e-007578010c80\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.109644 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-trusted-ca-bundle\") pod \"b1c48715-1326-41fd-8b2e-007578010c80\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.109672 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-service-ca\") pod \"b1c48715-1326-41fd-8b2e-007578010c80\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.109712 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-user-template-login\") pod \"b1c48715-1326-41fd-8b2e-007578010c80\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.109737 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b1c48715-1326-41fd-8b2e-007578010c80-audit-policies\") pod \"b1c48715-1326-41fd-8b2e-007578010c80\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.109761 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-user-idp-0-file-data\") pod \"b1c48715-1326-41fd-8b2e-007578010c80\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.109790 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-user-template-error\") pod \"b1c48715-1326-41fd-8b2e-007578010c80\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.109839 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-router-certs\") pod \"b1c48715-1326-41fd-8b2e-007578010c80\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.109888 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b1c48715-1326-41fd-8b2e-007578010c80-audit-dir\") pod \"b1c48715-1326-41fd-8b2e-007578010c80\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.109912 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-session\") pod \"b1c48715-1326-41fd-8b2e-007578010c80\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.109943 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-user-template-provider-selection\") pod \"b1c48715-1326-41fd-8b2e-007578010c80\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.109971 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-cliconfig\") pod \"b1c48715-1326-41fd-8b2e-007578010c80\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.109992 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-serving-cert\") 
pod \"b1c48715-1326-41fd-8b2e-007578010c80\" (UID: \"b1c48715-1326-41fd-8b2e-007578010c80\") " Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.113945 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "b1c48715-1326-41fd-8b2e-007578010c80" (UID: "b1c48715-1326-41fd-8b2e-007578010c80"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.114309 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "b1c48715-1326-41fd-8b2e-007578010c80" (UID: "b1c48715-1326-41fd-8b2e-007578010c80"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.114611 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1c48715-1326-41fd-8b2e-007578010c80-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "b1c48715-1326-41fd-8b2e-007578010c80" (UID: "b1c48715-1326-41fd-8b2e-007578010c80"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.116268 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6644f974c8-zdsbm"] Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.116958 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "b1c48715-1326-41fd-8b2e-007578010c80" (UID: "b1c48715-1326-41fd-8b2e-007578010c80"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.119727 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "b1c48715-1326-41fd-8b2e-007578010c80" (UID: "b1c48715-1326-41fd-8b2e-007578010c80"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.119990 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "b1c48715-1326-41fd-8b2e-007578010c80" (UID: "b1c48715-1326-41fd-8b2e-007578010c80"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.120084 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1c48715-1326-41fd-8b2e-007578010c80-kube-api-access-8hmtx" (OuterVolumeSpecName: "kube-api-access-8hmtx") pod "b1c48715-1326-41fd-8b2e-007578010c80" (UID: "b1c48715-1326-41fd-8b2e-007578010c80"). 
InnerVolumeSpecName "kube-api-access-8hmtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.120209 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1c48715-1326-41fd-8b2e-007578010c80-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b1c48715-1326-41fd-8b2e-007578010c80" (UID: "b1c48715-1326-41fd-8b2e-007578010c80"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.123541 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "b1c48715-1326-41fd-8b2e-007578010c80" (UID: "b1c48715-1326-41fd-8b2e-007578010c80"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.146360 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "b1c48715-1326-41fd-8b2e-007578010c80" (UID: "b1c48715-1326-41fd-8b2e-007578010c80"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.146828 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "b1c48715-1326-41fd-8b2e-007578010c80" (UID: "b1c48715-1326-41fd-8b2e-007578010c80"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.147261 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "b1c48715-1326-41fd-8b2e-007578010c80" (UID: "b1c48715-1326-41fd-8b2e-007578010c80"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.156828 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "b1c48715-1326-41fd-8b2e-007578010c80" (UID: "b1c48715-1326-41fd-8b2e-007578010c80"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.157381 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "b1c48715-1326-41fd-8b2e-007578010c80" (UID: "b1c48715-1326-41fd-8b2e-007578010c80"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.211582 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-system-router-certs\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.211729 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ca6f4fb7-b47d-4917-801a-4c4262c481a6-audit-dir\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.211755 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ca6f4fb7-b47d-4917-801a-4c4262c481a6-audit-policies\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.211787 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-user-template-error\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.211812 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-system-service-ca\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.211836 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tmps\" (UniqueName: \"kubernetes.io/projected/ca6f4fb7-b47d-4917-801a-4c4262c481a6-kube-api-access-8tmps\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.211859 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.211895 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: 
\"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.211918 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.211944 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.211977 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.212004 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-system-session\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.212034 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.212060 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-user-template-login\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.212107 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.212124 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 
17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.212135 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.212148 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.212161 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hmtx\" (UniqueName: \"kubernetes.io/projected/b1c48715-1326-41fd-8b2e-007578010c80-kube-api-access-8hmtx\") on node \"crc\" DevicePath \"\"" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.212174 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.212189 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.212201 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.212212 4680 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b1c48715-1326-41fd-8b2e-007578010c80-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.212223 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.212236 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.212248 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.212261 4680 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b1c48715-1326-41fd-8b2e-007578010c80-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.212275 4680 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b1c48715-1326-41fd-8b2e-007578010c80-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 17 17:47:42 crc 
kubenswrapper[4680]: I0217 17:47:42.313489 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-system-router-certs\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.313872 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ca6f4fb7-b47d-4917-801a-4c4262c481a6-audit-dir\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.314036 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ca6f4fb7-b47d-4917-801a-4c4262c481a6-audit-policies\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.314120 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-user-template-error\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.314204 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-system-service-ca\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.313935 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ca6f4fb7-b47d-4917-801a-4c4262c481a6-audit-dir\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.314288 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tmps\" (UniqueName: \"kubernetes.io/projected/ca6f4fb7-b47d-4917-801a-4c4262c481a6-kube-api-access-8tmps\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.314448 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.314535 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.314639 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.314722 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.314789 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ca6f4fb7-b47d-4917-801a-4c4262c481a6-audit-policies\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.314880 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.314955 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-system-session\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.315040 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.315118 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-user-template-login\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.315136 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-system-service-ca\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.316167 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.316183 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.317155 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-user-template-error\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.317429 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.317552 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.317583 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-system-router-certs\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.320484 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.322182 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.323350 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-system-session\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.324857 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ca6f4fb7-b47d-4917-801a-4c4262c481a6-v4-0-config-user-template-login\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.340104 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tmps\" (UniqueName: \"kubernetes.io/projected/ca6f4fb7-b47d-4917-801a-4c4262c481a6-kube-api-access-8tmps\") pod \"oauth-openshift-6644f974c8-zdsbm\" (UID: \"ca6f4fb7-b47d-4917-801a-4c4262c481a6\") " pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.410040 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.649507 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6644f974c8-zdsbm"] Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.868714 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" event={"ID":"ca6f4fb7-b47d-4917-801a-4c4262c481a6","Type":"ContainerStarted","Data":"feb99043382f0e5faf33a18fc3b8f8b1ec0d073ca4c18051aa8176972a78aa90"} Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.870305 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-czdr4" podUID="03756e56-4c50-4e20-91b1-16fdea59c4a0" containerName="registry-server" containerID="cri-o://635f3aa57cb48054c99f8a6c2afa4e46ac1c88c8fd493cb98c8f04e6ae018816" gracePeriod=2 Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.870409 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" event={"ID":"b1c48715-1326-41fd-8b2e-007578010c80","Type":"ContainerDied","Data":"fa34fb27a3599155d7cd9bd4be4ddcfea1efd1f867d401958039d611a8125338"} Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.870503 4680 scope.go:117] "RemoveContainer" containerID="c1ba09e1cfac89999cbbf7d1fe8539bf06d3bcbaa3e369ff8bd5332863f35f43" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.870833 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qxkds" Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.913342 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qxkds"] Feb 17 17:47:42 crc kubenswrapper[4680]: I0217 17:47:42.919568 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qxkds"] Feb 17 17:47:43 crc kubenswrapper[4680]: I0217 17:47:43.086999 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-76dxl" podUID="a586c719-e0d8-4941-ae16-3034a8f66616" containerName="registry-server" probeResult="failure" output=< Feb 17 17:47:43 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Feb 17 17:47:43 crc kubenswrapper[4680]: > Feb 17 17:47:43 crc kubenswrapper[4680]: I0217 17:47:43.111181 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1c48715-1326-41fd-8b2e-007578010c80" path="/var/lib/kubelet/pods/b1c48715-1326-41fd-8b2e-007578010c80/volumes" Feb 17 17:47:43 crc kubenswrapper[4680]: I0217 17:47:43.877144 4680 generic.go:334] "Generic (PLEG): container finished" podID="03756e56-4c50-4e20-91b1-16fdea59c4a0" containerID="635f3aa57cb48054c99f8a6c2afa4e46ac1c88c8fd493cb98c8f04e6ae018816" exitCode=0 Feb 17 17:47:43 crc kubenswrapper[4680]: I0217 17:47:43.877265 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-czdr4" event={"ID":"03756e56-4c50-4e20-91b1-16fdea59c4a0","Type":"ContainerDied","Data":"635f3aa57cb48054c99f8a6c2afa4e46ac1c88c8fd493cb98c8f04e6ae018816"} Feb 17 17:47:43 crc kubenswrapper[4680]: I0217 17:47:43.879505 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" event={"ID":"ca6f4fb7-b47d-4917-801a-4c4262c481a6","Type":"ContainerStarted","Data":"7666ae4876cfba8b808a441342e2719c449d087236b847f06bbb167668d68652"} Feb 17 17:47:43 crc kubenswrapper[4680]: I0217 17:47:43.879951 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:43 crc kubenswrapper[4680]: I0217 17:47:43.899594 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" podStartSLOduration=28.899576552 podStartE2EDuration="28.899576552s" podCreationTimestamp="2026-02-17 17:47:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:47:43.898272741 +0000 UTC m=+193.449171440" watchObservedRunningTime="2026-02-17 17:47:43.899576552 +0000 UTC m=+193.450475231" Feb 17 17:47:44 crc kubenswrapper[4680]: I0217 17:47:44.300939 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-czdr4" Feb 17 17:47:44 crc kubenswrapper[4680]: I0217 17:47:44.342469 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03756e56-4c50-4e20-91b1-16fdea59c4a0-utilities\") pod \"03756e56-4c50-4e20-91b1-16fdea59c4a0\" (UID: \"03756e56-4c50-4e20-91b1-16fdea59c4a0\") " Feb 17 17:47:44 crc kubenswrapper[4680]: I0217 17:47:44.342555 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6w446\" (UniqueName: \"kubernetes.io/projected/03756e56-4c50-4e20-91b1-16fdea59c4a0-kube-api-access-6w446\") pod \"03756e56-4c50-4e20-91b1-16fdea59c4a0\" (UID: \"03756e56-4c50-4e20-91b1-16fdea59c4a0\") " Feb 17 17:47:44 crc kubenswrapper[4680]: I0217 17:47:44.342665 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03756e56-4c50-4e20-91b1-16fdea59c4a0-catalog-content\") pod \"03756e56-4c50-4e20-91b1-16fdea59c4a0\" (UID: \"03756e56-4c50-4e20-91b1-16fdea59c4a0\") " Feb 17 17:47:44 crc kubenswrapper[4680]: I0217 17:47:44.343726 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03756e56-4c50-4e20-91b1-16fdea59c4a0-utilities" (OuterVolumeSpecName: "utilities") pod "03756e56-4c50-4e20-91b1-16fdea59c4a0" (UID: "03756e56-4c50-4e20-91b1-16fdea59c4a0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:47:44 crc kubenswrapper[4680]: I0217 17:47:44.353579 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03756e56-4c50-4e20-91b1-16fdea59c4a0-kube-api-access-6w446" (OuterVolumeSpecName: "kube-api-access-6w446") pod "03756e56-4c50-4e20-91b1-16fdea59c4a0" (UID: "03756e56-4c50-4e20-91b1-16fdea59c4a0"). InnerVolumeSpecName "kube-api-access-6w446". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:47:44 crc kubenswrapper[4680]: I0217 17:47:44.366918 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03756e56-4c50-4e20-91b1-16fdea59c4a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03756e56-4c50-4e20-91b1-16fdea59c4a0" (UID: "03756e56-4c50-4e20-91b1-16fdea59c4a0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:47:44 crc kubenswrapper[4680]: I0217 17:47:44.443775 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03756e56-4c50-4e20-91b1-16fdea59c4a0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:47:44 crc kubenswrapper[4680]: I0217 17:47:44.443803 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03756e56-4c50-4e20-91b1-16fdea59c4a0-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:47:44 crc kubenswrapper[4680]: I0217 17:47:44.443813 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6w446\" (UniqueName: \"kubernetes.io/projected/03756e56-4c50-4e20-91b1-16fdea59c4a0-kube-api-access-6w446\") on node \"crc\" DevicePath \"\"" Feb 17 17:47:44 crc kubenswrapper[4680]: I0217 17:47:44.534126 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6644f974c8-zdsbm" Feb 17 17:47:44 crc kubenswrapper[4680]: I0217 17:47:44.887437 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-czdr4" event={"ID":"03756e56-4c50-4e20-91b1-16fdea59c4a0","Type":"ContainerDied","Data":"0e98dc2eeee3409eec2ac800cbd37e08d99c03fa9b4c4ee92fd33a40063b8850"} Feb 17 17:47:44 crc kubenswrapper[4680]: I0217 17:47:44.887510 4680 scope.go:117] "RemoveContainer" containerID="635f3aa57cb48054c99f8a6c2afa4e46ac1c88c8fd493cb98c8f04e6ae018816" Feb 17 17:47:44 crc kubenswrapper[4680]: I0217 17:47:44.887606 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-czdr4" Feb 17 17:47:44 crc kubenswrapper[4680]: I0217 17:47:44.911133 4680 scope.go:117] "RemoveContainer" containerID="e1a558590d632968b5ff6c913eb7cfacf892c6cfeb44f4d02696c293c106c6ea" Feb 17 17:47:44 crc kubenswrapper[4680]: I0217 17:47:44.919235 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-czdr4"] Feb 17 17:47:44 crc kubenswrapper[4680]: I0217 17:47:44.923736 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-czdr4"] Feb 17 17:47:44 crc kubenswrapper[4680]: I0217 17:47:44.942411 4680 scope.go:117] "RemoveContainer" containerID="183033e40f62f9955d6d43c72080738369f7aee0329f28f4985c340a34c16a5f" Feb 17 17:47:45 crc kubenswrapper[4680]: I0217 17:47:45.114558 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03756e56-4c50-4e20-91b1-16fdea59c4a0" path="/var/lib/kubelet/pods/03756e56-4c50-4e20-91b1-16fdea59c4a0/volumes" Feb 17 17:47:48 crc kubenswrapper[4680]: I0217 17:47:48.731475 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6hgx6" Feb 17 17:47:48 crc kubenswrapper[4680]: I0217 17:47:48.775540 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6hgx6" Feb 17 17:47:49 crc kubenswrapper[4680]: I0217 17:47:49.127821 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d4hnz" Feb 17 17:47:49 crc kubenswrapper[4680]: I0217 17:47:49.368760 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4spzf" Feb 17 17:47:50 crc kubenswrapper[4680]: I0217 17:47:50.586201 4680 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kkz99" Feb 17 17:47:50 crc kubenswrapper[4680]: I0217 17:47:50.622495 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kkz99" Feb 17 17:47:50 crc kubenswrapper[4680]: I0217 17:47:50.983976 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d4hnz"] Feb 17 17:47:50 crc kubenswrapper[4680]: I0217 17:47:50.984462 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d4hnz" podUID="f7d3d64c-68b5-430e-9370-32d50d6efaa0" containerName="registry-server" containerID="cri-o://389974a7b07fe59c1015bcd151c42abbec8c3c7fc7a20aa98254ffc2ae960674" gracePeriod=2 Feb 17 17:47:51 crc kubenswrapper[4680]: I0217 17:47:51.586408 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4spzf"] Feb 17 17:47:51 crc kubenswrapper[4680]: I0217 17:47:51.586715 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4spzf" podUID="14614809-63c8-4553-aa29-c129132071b7" containerName="registry-server" containerID="cri-o://01d41f551b2050b1a0aab9ae54365340548516b69b871f69821b1862c601b22e" gracePeriod=2 Feb 17 17:47:51 crc kubenswrapper[4680]: I0217 17:47:51.936306 4680 generic.go:334] "Generic (PLEG): container finished" podID="f7d3d64c-68b5-430e-9370-32d50d6efaa0" containerID="389974a7b07fe59c1015bcd151c42abbec8c3c7fc7a20aa98254ffc2ae960674" exitCode=0 Feb 17 17:47:51 crc kubenswrapper[4680]: I0217 17:47:51.936352 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d4hnz" event={"ID":"f7d3d64c-68b5-430e-9370-32d50d6efaa0","Type":"ContainerDied","Data":"389974a7b07fe59c1015bcd151c42abbec8c3c7fc7a20aa98254ffc2ae960674"} Feb 17 17:47:51 crc kubenswrapper[4680]: I0217 17:47:51.947267 4680 generic.go:334] "Generic (PLEG): container finished" podID="14614809-63c8-4553-aa29-c129132071b7" containerID="01d41f551b2050b1a0aab9ae54365340548516b69b871f69821b1862c601b22e" exitCode=0 Feb 17 17:47:51 crc kubenswrapper[4680]: I0217 17:47:51.947336 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4spzf" event={"ID":"14614809-63c8-4553-aa29-c129132071b7","Type":"ContainerDied","Data":"01d41f551b2050b1a0aab9ae54365340548516b69b871f69821b1862c601b22e"} Feb 17 17:47:52 crc kubenswrapper[4680]: I0217 17:47:51.996900 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4spzf" Feb 17 17:47:52 crc kubenswrapper[4680]: I0217 17:47:52.000237 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d4hnz" Feb 17 17:47:52 crc kubenswrapper[4680]: I0217 17:47:52.031604 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7d3d64c-68b5-430e-9370-32d50d6efaa0-catalog-content\") pod \"f7d3d64c-68b5-430e-9370-32d50d6efaa0\" (UID: \"f7d3d64c-68b5-430e-9370-32d50d6efaa0\") " Feb 17 17:47:52 crc kubenswrapper[4680]: I0217 17:47:52.031708 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14614809-63c8-4553-aa29-c129132071b7-catalog-content\") pod \"14614809-63c8-4553-aa29-c129132071b7\" (UID: \"14614809-63c8-4553-aa29-c129132071b7\") " Feb 17 17:47:52 crc kubenswrapper[4680]: I0217 17:47:52.031737 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szfwf\" (UniqueName: \"kubernetes.io/projected/14614809-63c8-4553-aa29-c129132071b7-kube-api-access-szfwf\") pod \"14614809-63c8-4553-aa29-c129132071b7\" (UID: \"14614809-63c8-4553-aa29-c129132071b7\") " Feb 17 17:47:52 crc kubenswrapper[4680]: I0217 17:47:52.031792 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14614809-63c8-4553-aa29-c129132071b7-utilities\") pod \"14614809-63c8-4553-aa29-c129132071b7\" (UID: \"14614809-63c8-4553-aa29-c129132071b7\") " Feb 17 17:47:52 crc kubenswrapper[4680]: I0217 17:47:52.031841 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8tqr\" (UniqueName: \"kubernetes.io/projected/f7d3d64c-68b5-430e-9370-32d50d6efaa0-kube-api-access-n8tqr\") pod \"f7d3d64c-68b5-430e-9370-32d50d6efaa0\" (UID: \"f7d3d64c-68b5-430e-9370-32d50d6efaa0\") " Feb 17 17:47:52 crc kubenswrapper[4680]: I0217 17:47:52.031876 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7d3d64c-68b5-430e-9370-32d50d6efaa0-utilities\") pod \"f7d3d64c-68b5-430e-9370-32d50d6efaa0\" (UID: \"f7d3d64c-68b5-430e-9370-32d50d6efaa0\") " Feb 17 17:47:52 crc kubenswrapper[4680]: I0217 17:47:52.033753 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14614809-63c8-4553-aa29-c129132071b7-utilities" (OuterVolumeSpecName: "utilities") pod "14614809-63c8-4553-aa29-c129132071b7" (UID: "14614809-63c8-4553-aa29-c129132071b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:47:52 crc kubenswrapper[4680]: I0217 17:47:52.040885 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7d3d64c-68b5-430e-9370-32d50d6efaa0-kube-api-access-n8tqr" (OuterVolumeSpecName: "kube-api-access-n8tqr") pod "f7d3d64c-68b5-430e-9370-32d50d6efaa0" (UID: "f7d3d64c-68b5-430e-9370-32d50d6efaa0"). InnerVolumeSpecName "kube-api-access-n8tqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:47:52 crc kubenswrapper[4680]: I0217 17:47:52.041394 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7d3d64c-68b5-430e-9370-32d50d6efaa0-utilities" (OuterVolumeSpecName: "utilities") pod "f7d3d64c-68b5-430e-9370-32d50d6efaa0" (UID: "f7d3d64c-68b5-430e-9370-32d50d6efaa0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:47:52 crc kubenswrapper[4680]: I0217 17:47:52.054484 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14614809-63c8-4553-aa29-c129132071b7-kube-api-access-szfwf" (OuterVolumeSpecName: "kube-api-access-szfwf") pod "14614809-63c8-4553-aa29-c129132071b7" (UID: "14614809-63c8-4553-aa29-c129132071b7"). InnerVolumeSpecName "kube-api-access-szfwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:47:52 crc kubenswrapper[4680]: I0217 17:47:52.082094 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-76dxl" Feb 17 17:47:52 crc kubenswrapper[4680]: I0217 17:47:52.094788 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14614809-63c8-4553-aa29-c129132071b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14614809-63c8-4553-aa29-c129132071b7" (UID: "14614809-63c8-4553-aa29-c129132071b7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:47:52 crc kubenswrapper[4680]: I0217 17:47:52.094988 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7d3d64c-68b5-430e-9370-32d50d6efaa0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7d3d64c-68b5-430e-9370-32d50d6efaa0" (UID: "f7d3d64c-68b5-430e-9370-32d50d6efaa0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:47:52 crc kubenswrapper[4680]: I0217 17:47:52.120407 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-76dxl" Feb 17 17:47:52 crc kubenswrapper[4680]: I0217 17:47:52.133082 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7d3d64c-68b5-430e-9370-32d50d6efaa0-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:47:52 crc kubenswrapper[4680]: I0217 17:47:52.133119 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7d3d64c-68b5-430e-9370-32d50d6efaa0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:47:52 crc kubenswrapper[4680]: I0217 17:47:52.133135 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14614809-63c8-4553-aa29-c129132071b7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:47:52 crc kubenswrapper[4680]: I0217 17:47:52.133148 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szfwf\" (UniqueName: \"kubernetes.io/projected/14614809-63c8-4553-aa29-c129132071b7-kube-api-access-szfwf\") on node \"crc\" DevicePath \"\"" Feb 17 17:47:52 crc kubenswrapper[4680]: I0217 17:47:52.133158 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14614809-63c8-4553-aa29-c129132071b7-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:47:52 crc kubenswrapper[4680]: I0217 17:47:52.133166 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8tqr\" (UniqueName: \"kubernetes.io/projected/f7d3d64c-68b5-430e-9370-32d50d6efaa0-kube-api-access-n8tqr\") on node \"crc\" DevicePath \"\"" Feb 17 17:47:52 crc kubenswrapper[4680]: I0217 17:47:52.958922 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d4hnz" Feb 17 17:47:52 crc kubenswrapper[4680]: I0217 17:47:52.958830 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d4hnz" event={"ID":"f7d3d64c-68b5-430e-9370-32d50d6efaa0","Type":"ContainerDied","Data":"b85f0c2626d73082593fac9ec83e2ea9f37352ce6937e14ab354830e761063cf"} Feb 17 17:47:52 crc kubenswrapper[4680]: I0217 17:47:52.959438 4680 scope.go:117] "RemoveContainer" containerID="389974a7b07fe59c1015bcd151c42abbec8c3c7fc7a20aa98254ffc2ae960674" Feb 17 17:47:52 crc kubenswrapper[4680]: I0217 17:47:52.963002 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4spzf" event={"ID":"14614809-63c8-4553-aa29-c129132071b7","Type":"ContainerDied","Data":"8215b1596ac0e40c2b67f81d5ce1ac6bdf47a0224965e436534de47ef4cb03a4"} Feb 17 17:47:52 crc kubenswrapper[4680]: I0217 17:47:52.963063 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4spzf" Feb 17 17:47:52 crc kubenswrapper[4680]: I0217 17:47:52.984132 4680 scope.go:117] "RemoveContainer" containerID="c3e30a3469821ec5b3e1f5dfb90244227bb74b1319f67275b5eb46cd843e90a0" Feb 17 17:47:53 crc kubenswrapper[4680]: I0217 17:47:53.004683 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4spzf"] Feb 17 17:47:53 crc kubenswrapper[4680]: I0217 17:47:53.004798 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4spzf"] Feb 17 17:47:53 crc kubenswrapper[4680]: I0217 17:47:53.006792 4680 scope.go:117] "RemoveContainer" containerID="2db33c692103b6519d0e2d6e7b965a0878eb61dba8a1a28fea254c338c8dae34" Feb 17 17:47:53 crc kubenswrapper[4680]: I0217 17:47:53.020918 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d4hnz"] Feb 17 17:47:53 crc kubenswrapper[4680]: I0217 17:47:53.030991 4680 scope.go:117] "RemoveContainer" containerID="01d41f551b2050b1a0aab9ae54365340548516b69b871f69821b1862c601b22e" Feb 17 17:47:53 crc kubenswrapper[4680]: I0217 17:47:53.032906 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d4hnz"] Feb 17 17:47:53 crc kubenswrapper[4680]: I0217 17:47:53.043213 4680 scope.go:117] "RemoveContainer" containerID="402d4ff776b7e9e39d814608388f87b45bede162fa7fb404720df47d064d025c" Feb 17 17:47:53 crc kubenswrapper[4680]: I0217 17:47:53.056446 4680 scope.go:117] "RemoveContainer" containerID="77a5a4732852c667a32656dd72bc2eff4aad01e89db3d827d5d467badaead33f" Feb 17 17:47:53 crc kubenswrapper[4680]: I0217 17:47:53.111418 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14614809-63c8-4553-aa29-c129132071b7" path="/var/lib/kubelet/pods/14614809-63c8-4553-aa29-c129132071b7/volumes" Feb 17 17:47:53 crc kubenswrapper[4680]: I0217 17:47:53.112933 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7d3d64c-68b5-430e-9370-32d50d6efaa0" path="/var/lib/kubelet/pods/f7d3d64c-68b5-430e-9370-32d50d6efaa0/volumes" Feb 17 17:47:55 crc kubenswrapper[4680]: I0217 17:47:55.985860 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-76dxl"] Feb 17 17:47:55 crc kubenswrapper[4680]: I0217 17:47:55.986807 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-76dxl" 
podUID="a586c719-e0d8-4941-ae16-3034a8f66616" containerName="registry-server" containerID="cri-o://b106f59a0723f8f62232fb78b8daeadc72bd8947c9f1352ee8ba14994b57d874" gracePeriod=2 Feb 17 17:47:56 crc kubenswrapper[4680]: I0217 17:47:56.414180 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-76dxl" Feb 17 17:47:56 crc kubenswrapper[4680]: I0217 17:47:56.494264 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntrq5\" (UniqueName: \"kubernetes.io/projected/a586c719-e0d8-4941-ae16-3034a8f66616-kube-api-access-ntrq5\") pod \"a586c719-e0d8-4941-ae16-3034a8f66616\" (UID: \"a586c719-e0d8-4941-ae16-3034a8f66616\") " Feb 17 17:47:56 crc kubenswrapper[4680]: I0217 17:47:56.494369 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a586c719-e0d8-4941-ae16-3034a8f66616-utilities\") pod \"a586c719-e0d8-4941-ae16-3034a8f66616\" (UID: \"a586c719-e0d8-4941-ae16-3034a8f66616\") " Feb 17 17:47:56 crc kubenswrapper[4680]: I0217 17:47:56.494410 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a586c719-e0d8-4941-ae16-3034a8f66616-catalog-content\") pod \"a586c719-e0d8-4941-ae16-3034a8f66616\" (UID: \"a586c719-e0d8-4941-ae16-3034a8f66616\") " Feb 17 17:47:56 crc kubenswrapper[4680]: I0217 17:47:56.495937 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a586c719-e0d8-4941-ae16-3034a8f66616-utilities" (OuterVolumeSpecName: "utilities") pod "a586c719-e0d8-4941-ae16-3034a8f66616" (UID: "a586c719-e0d8-4941-ae16-3034a8f66616"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:47:56 crc kubenswrapper[4680]: I0217 17:47:56.501616 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a586c719-e0d8-4941-ae16-3034a8f66616-kube-api-access-ntrq5" (OuterVolumeSpecName: "kube-api-access-ntrq5") pod "a586c719-e0d8-4941-ae16-3034a8f66616" (UID: "a586c719-e0d8-4941-ae16-3034a8f66616"). InnerVolumeSpecName "kube-api-access-ntrq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:47:56 crc kubenswrapper[4680]: I0217 17:47:56.595866 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a586c719-e0d8-4941-ae16-3034a8f66616-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:47:56 crc kubenswrapper[4680]: I0217 17:47:56.595917 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntrq5\" (UniqueName: \"kubernetes.io/projected/a586c719-e0d8-4941-ae16-3034a8f66616-kube-api-access-ntrq5\") on node \"crc\" DevicePath \"\"" Feb 17 17:47:56 crc kubenswrapper[4680]: I0217 17:47:56.630798 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a586c719-e0d8-4941-ae16-3034a8f66616-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a586c719-e0d8-4941-ae16-3034a8f66616" (UID: "a586c719-e0d8-4941-ae16-3034a8f66616"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:47:56 crc kubenswrapper[4680]: I0217 17:47:56.697593 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a586c719-e0d8-4941-ae16-3034a8f66616-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:47:56 crc kubenswrapper[4680]: I0217 17:47:56.985503 4680 generic.go:334] "Generic (PLEG): container finished" podID="a586c719-e0d8-4941-ae16-3034a8f66616" containerID="b106f59a0723f8f62232fb78b8daeadc72bd8947c9f1352ee8ba14994b57d874" exitCode=0 Feb 17 17:47:56 crc kubenswrapper[4680]: I0217 17:47:56.985540 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76dxl" event={"ID":"a586c719-e0d8-4941-ae16-3034a8f66616","Type":"ContainerDied","Data":"b106f59a0723f8f62232fb78b8daeadc72bd8947c9f1352ee8ba14994b57d874"} Feb 17 17:47:56 crc kubenswrapper[4680]: I0217 17:47:56.985576 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76dxl" event={"ID":"a586c719-e0d8-4941-ae16-3034a8f66616","Type":"ContainerDied","Data":"ac1e9ad1f0b927f9871fb1aaf0edc9b07361765832b11a4efa994b26c4e1e2e8"} Feb 17 17:47:56 crc kubenswrapper[4680]: I0217 17:47:56.985596 4680 scope.go:117] "RemoveContainer" containerID="b106f59a0723f8f62232fb78b8daeadc72bd8947c9f1352ee8ba14994b57d874" Feb 17 17:47:56 crc kubenswrapper[4680]: I0217 17:47:56.985524 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-76dxl" Feb 17 17:47:57 crc kubenswrapper[4680]: I0217 17:47:57.002008 4680 scope.go:117] "RemoveContainer" containerID="2555403ecc8f5f1d57765d96da42001c97626d816db84115c8b5e769fd12edd9" Feb 17 17:47:57 crc kubenswrapper[4680]: I0217 17:47:57.016155 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-76dxl"] Feb 17 17:47:57 crc kubenswrapper[4680]: I0217 17:47:57.027603 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-76dxl"] Feb 17 17:47:57 crc kubenswrapper[4680]: I0217 17:47:57.027771 4680 scope.go:117] "RemoveContainer" containerID="f2b00c8fce0898083fd3c2944d197fd85a6ff9915f1d8d8280fd56f8ed8974c2" Feb 17 17:47:57 crc kubenswrapper[4680]: I0217 17:47:57.042453 4680 scope.go:117] "RemoveContainer" containerID="b106f59a0723f8f62232fb78b8daeadc72bd8947c9f1352ee8ba14994b57d874" Feb 17 17:47:57 crc kubenswrapper[4680]: E0217 17:47:57.042859 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b106f59a0723f8f62232fb78b8daeadc72bd8947c9f1352ee8ba14994b57d874\": container with ID starting with b106f59a0723f8f62232fb78b8daeadc72bd8947c9f1352ee8ba14994b57d874 not found: ID does not exist" containerID="b106f59a0723f8f62232fb78b8daeadc72bd8947c9f1352ee8ba14994b57d874" Feb 17 17:47:57 crc kubenswrapper[4680]: I0217 17:47:57.042901 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b106f59a0723f8f62232fb78b8daeadc72bd8947c9f1352ee8ba14994b57d874"} err="failed to get container status \"b106f59a0723f8f62232fb78b8daeadc72bd8947c9f1352ee8ba14994b57d874\": rpc error: code = NotFound desc = could not find container \"b106f59a0723f8f62232fb78b8daeadc72bd8947c9f1352ee8ba14994b57d874\": container with ID starting with b106f59a0723f8f62232fb78b8daeadc72bd8947c9f1352ee8ba14994b57d874 not found: ID does not exist" Feb 17 17:47:57 crc 
kubenswrapper[4680]: I0217 17:47:57.042954 4680 scope.go:117] "RemoveContainer" containerID="2555403ecc8f5f1d57765d96da42001c97626d816db84115c8b5e769fd12edd9" Feb 17 17:47:57 crc kubenswrapper[4680]: E0217 17:47:57.043178 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2555403ecc8f5f1d57765d96da42001c97626d816db84115c8b5e769fd12edd9\": container with ID starting with 2555403ecc8f5f1d57765d96da42001c97626d816db84115c8b5e769fd12edd9 not found: ID does not exist" containerID="2555403ecc8f5f1d57765d96da42001c97626d816db84115c8b5e769fd12edd9" Feb 17 17:47:57 crc kubenswrapper[4680]: I0217 17:47:57.043204 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2555403ecc8f5f1d57765d96da42001c97626d816db84115c8b5e769fd12edd9"} err="failed to get container status \"2555403ecc8f5f1d57765d96da42001c97626d816db84115c8b5e769fd12edd9\": rpc error: code = NotFound desc = could not find container \"2555403ecc8f5f1d57765d96da42001c97626d816db84115c8b5e769fd12edd9\": container with ID starting with 2555403ecc8f5f1d57765d96da42001c97626d816db84115c8b5e769fd12edd9 not found: ID does not exist" Feb 17 17:47:57 crc kubenswrapper[4680]: I0217 17:47:57.043234 4680 scope.go:117] "RemoveContainer" containerID="f2b00c8fce0898083fd3c2944d197fd85a6ff9915f1d8d8280fd56f8ed8974c2" Feb 17 17:47:57 crc kubenswrapper[4680]: E0217 17:47:57.043838 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2b00c8fce0898083fd3c2944d197fd85a6ff9915f1d8d8280fd56f8ed8974c2\": container with ID starting with f2b00c8fce0898083fd3c2944d197fd85a6ff9915f1d8d8280fd56f8ed8974c2 not found: ID does not exist" containerID="f2b00c8fce0898083fd3c2944d197fd85a6ff9915f1d8d8280fd56f8ed8974c2" Feb 17 17:47:57 crc kubenswrapper[4680]: I0217 17:47:57.043932 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2b00c8fce0898083fd3c2944d197fd85a6ff9915f1d8d8280fd56f8ed8974c2"} err="failed to get container status \"f2b00c8fce0898083fd3c2944d197fd85a6ff9915f1d8d8280fd56f8ed8974c2\": rpc error: code = NotFound desc = could not find container \"f2b00c8fce0898083fd3c2944d197fd85a6ff9915f1d8d8280fd56f8ed8974c2\": container with ID starting with f2b00c8fce0898083fd3c2944d197fd85a6ff9915f1d8d8280fd56f8ed8974c2 not found: ID does not exist" Feb 17 17:47:57 crc kubenswrapper[4680]: I0217 17:47:57.111422 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a586c719-e0d8-4941-ae16-3034a8f66616" path="/var/lib/kubelet/pods/a586c719-e0d8-4941-ae16-3034a8f66616/volumes" Feb 17 17:47:58 crc kubenswrapper[4680]: I0217 17:47:58.350506 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.823717 4680 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 17:48:01 crc kubenswrapper[4680]: E0217 17:48:01.824153 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7d3d64c-68b5-430e-9370-32d50d6efaa0" containerName="registry-server" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.824166 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7d3d64c-68b5-430e-9370-32d50d6efaa0" containerName="registry-server" Feb 17 17:48:01 crc kubenswrapper[4680]: E0217 17:48:01.824177 4680 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a586c719-e0d8-4941-ae16-3034a8f66616" containerName="registry-server" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.824183 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="a586c719-e0d8-4941-ae16-3034a8f66616" containerName="registry-server" Feb 17 17:48:01 crc kubenswrapper[4680]: E0217 17:48:01.824194 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7d3d64c-68b5-430e-9370-32d50d6efaa0" containerName="extract-content" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.824201 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7d3d64c-68b5-430e-9370-32d50d6efaa0" containerName="extract-content" Feb 17 17:48:01 crc kubenswrapper[4680]: E0217 17:48:01.824211 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03756e56-4c50-4e20-91b1-16fdea59c4a0" containerName="registry-server" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.824216 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="03756e56-4c50-4e20-91b1-16fdea59c4a0" containerName="registry-server" Feb 17 17:48:01 crc kubenswrapper[4680]: E0217 17:48:01.824222 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a586c719-e0d8-4941-ae16-3034a8f66616" containerName="extract-content" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.824228 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="a586c719-e0d8-4941-ae16-3034a8f66616" containerName="extract-content" Feb 17 17:48:01 crc kubenswrapper[4680]: E0217 17:48:01.824235 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03756e56-4c50-4e20-91b1-16fdea59c4a0" containerName="extract-utilities" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.824241 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="03756e56-4c50-4e20-91b1-16fdea59c4a0" containerName="extract-utilities" Feb 17 17:48:01 crc kubenswrapper[4680]: E0217 17:48:01.824252 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03756e56-4c50-4e20-91b1-16fdea59c4a0" containerName="extract-content" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.824258 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="03756e56-4c50-4e20-91b1-16fdea59c4a0" containerName="extract-content" Feb 17 17:48:01 crc kubenswrapper[4680]: E0217 17:48:01.824267 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14614809-63c8-4553-aa29-c129132071b7" containerName="extract-utilities" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.824272 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="14614809-63c8-4553-aa29-c129132071b7" containerName="extract-utilities" Feb 17 17:48:01 crc kubenswrapper[4680]: E0217 17:48:01.824282 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14614809-63c8-4553-aa29-c129132071b7" containerName="registry-server" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.824287 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="14614809-63c8-4553-aa29-c129132071b7" containerName="registry-server" Feb 17 17:48:01 crc kubenswrapper[4680]: E0217 17:48:01.824295 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14614809-63c8-4553-aa29-c129132071b7" containerName="extract-content" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.824300 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="14614809-63c8-4553-aa29-c129132071b7" containerName="extract-content" Feb 17 17:48:01 crc kubenswrapper[4680]: 
E0217 17:48:01.824310 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7d3d64c-68b5-430e-9370-32d50d6efaa0" containerName="extract-utilities" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.824330 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7d3d64c-68b5-430e-9370-32d50d6efaa0" containerName="extract-utilities" Feb 17 17:48:01 crc kubenswrapper[4680]: E0217 17:48:01.824338 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a586c719-e0d8-4941-ae16-3034a8f66616" containerName="extract-utilities" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.824344 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="a586c719-e0d8-4941-ae16-3034a8f66616" containerName="extract-utilities" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.824430 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="a586c719-e0d8-4941-ae16-3034a8f66616" containerName="registry-server" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.824441 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="14614809-63c8-4553-aa29-c129132071b7" containerName="registry-server" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.824450 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="03756e56-4c50-4e20-91b1-16fdea59c4a0" containerName="registry-server" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.824457 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7d3d64c-68b5-430e-9370-32d50d6efaa0" containerName="registry-server" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.824736 4680 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.824949 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.825400 4680 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 17:48:01 crc kubenswrapper[4680]: E0217 17:48:01.825509 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.825516 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 17:48:01 crc kubenswrapper[4680]: E0217 17:48:01.825526 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.825532 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 17:48:01 crc kubenswrapper[4680]: E0217 17:48:01.825539 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.825545 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 17:48:01 crc kubenswrapper[4680]: E0217 17:48:01.825554 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.825561 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 17:48:01 crc kubenswrapper[4680]: E0217 17:48:01.825570 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.825575 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 17:48:01 crc kubenswrapper[4680]: E0217 17:48:01.825582 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.825587 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.825672 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.825681 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.825689 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.825700 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.825708 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.825717 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 17:48:01 crc kubenswrapper[4680]: E0217 17:48:01.825797 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.825803 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.827065 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b" gracePeriod=15 Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.827476 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433" gracePeriod=15 Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.827550 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22" gracePeriod=15 Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.827709 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa" gracePeriod=15 Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.827717 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc" gracePeriod=15 Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.832769 4680 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.862100 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.862152 4680 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.862192 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.862223 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.862242 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.862267 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.862311 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.862381 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.864305 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.963283 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.963431 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.963427 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.963470 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.963537 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.963669 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.963693 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.963746 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.964033 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.964062 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.964070 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.964098 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.964128 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.964197 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.964207 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 17:48:01 crc kubenswrapper[4680]: I0217 17:48:01.964239 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 17:48:02 crc kubenswrapper[4680]: I0217 17:48:02.013415 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 17:48:02 crc kubenswrapper[4680]: I0217 17:48:02.014802 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 17:48:02 crc kubenswrapper[4680]: I0217 17:48:02.015361 4680 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433" exitCode=0 Feb 17 17:48:02 crc kubenswrapper[4680]: I0217 17:48:02.015381 4680 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa" exitCode=0 Feb 17 17:48:02 crc kubenswrapper[4680]: I0217 17:48:02.015388 4680 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc" exitCode=0 Feb 17 17:48:02 crc kubenswrapper[4680]: I0217 17:48:02.015395 4680 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22" 
exitCode=2 Feb 17 17:48:02 crc kubenswrapper[4680]: I0217 17:48:02.015428 4680 scope.go:117] "RemoveContainer" containerID="eb08ed39701a829054ad108630655fb7a6b1efdc90fe6f06671162ad205f1170" Feb 17 17:48:02 crc kubenswrapper[4680]: I0217 17:48:02.166922 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 17:48:02 crc kubenswrapper[4680]: E0217 17:48:02.189668 4680 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.180:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189519e0e78df19d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 17:48:02.188366237 +0000 UTC m=+211.739264916,LastTimestamp:2026-02-17 17:48:02.188366237 +0000 UTC m=+211.739264916,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 17:48:03 crc kubenswrapper[4680]: I0217 17:48:03.027594 4680 generic.go:334] "Generic (PLEG): container finished" podID="95c5ebe9-31a5-487f-a8a9-923d79843be0" containerID="3475cae537a9c8781391bb5a87e2d60d9721ee10855b50b8a01f27a14970fafe" exitCode=0 Feb 17 17:48:03 crc kubenswrapper[4680]: I0217 17:48:03.027692 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"95c5ebe9-31a5-487f-a8a9-923d79843be0","Type":"ContainerDied","Data":"3475cae537a9c8781391bb5a87e2d60d9721ee10855b50b8a01f27a14970fafe"} Feb 17 17:48:03 crc kubenswrapper[4680]: I0217 17:48:03.028648 4680 status_manager.go:851] "Failed to get status for pod" podUID="95c5ebe9-31a5-487f-a8a9-923d79843be0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:03 crc kubenswrapper[4680]: I0217 17:48:03.028844 4680 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:03 crc kubenswrapper[4680]: I0217 17:48:03.030860 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"459e9c94246b4e3258c1f64f6bc4908a463692d511a1e69a3ba3ad59411439af"} Feb 17 17:48:03 crc kubenswrapper[4680]: I0217 17:48:03.030977 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"3a885420a52dd6ed205e597ebf673243ec9cce1019a3091c3157184f1a8050c6"} Feb 17 17:48:03 crc kubenswrapper[4680]: I0217 17:48:03.031347 4680 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:03 crc kubenswrapper[4680]: I0217 17:48:03.032519 4680 status_manager.go:851] "Failed to get status for pod" podUID="95c5ebe9-31a5-487f-a8a9-923d79843be0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:03 crc kubenswrapper[4680]: I0217 17:48:03.034027 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 17:48:04 crc kubenswrapper[4680]: I0217 17:48:04.277077 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 17:48:04 crc kubenswrapper[4680]: I0217 17:48:04.278906 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:48:04 crc kubenswrapper[4680]: I0217 17:48:04.279509 4680 status_manager.go:851] "Failed to get status for pod" podUID="95c5ebe9-31a5-487f-a8a9-923d79843be0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:04 crc kubenswrapper[4680]: I0217 17:48:04.279859 4680 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:04 crc kubenswrapper[4680]: I0217 17:48:04.280105 4680 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:04 crc kubenswrapper[4680]: I0217 17:48:04.281609 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 17:48:04 crc kubenswrapper[4680]: I0217 17:48:04.281934 4680 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:04 crc kubenswrapper[4680]: I0217 17:48:04.282163 4680 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:04 crc kubenswrapper[4680]: I0217 17:48:04.282341 4680 status_manager.go:851] "Failed to get status for pod" podUID="95c5ebe9-31a5-487f-a8a9-923d79843be0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:04 crc kubenswrapper[4680]: I0217 17:48:04.428559 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 17:48:04 crc kubenswrapper[4680]: I0217 17:48:04.428645 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 17:48:04 crc kubenswrapper[4680]: I0217 17:48:04.428677 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/95c5ebe9-31a5-487f-a8a9-923d79843be0-kube-api-access\") pod \"95c5ebe9-31a5-487f-a8a9-923d79843be0\" (UID: \"95c5ebe9-31a5-487f-a8a9-923d79843be0\") " Feb 17 17:48:04 crc kubenswrapper[4680]: I0217 17:48:04.428722 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/95c5ebe9-31a5-487f-a8a9-923d79843be0-kubelet-dir\") pod \"95c5ebe9-31a5-487f-a8a9-923d79843be0\" (UID: \"95c5ebe9-31a5-487f-a8a9-923d79843be0\") " Feb 17 17:48:04 crc kubenswrapper[4680]: I0217 17:48:04.428771 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 17:48:04 crc kubenswrapper[4680]: I0217 17:48:04.428812 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/95c5ebe9-31a5-487f-a8a9-923d79843be0-var-lock\") pod \"95c5ebe9-31a5-487f-a8a9-923d79843be0\" (UID: \"95c5ebe9-31a5-487f-a8a9-923d79843be0\") " Feb 17 17:48:04 crc kubenswrapper[4680]: I0217 17:48:04.428826 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod 
"f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:48:04 crc kubenswrapper[4680]: I0217 17:48:04.428912 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:48:04 crc kubenswrapper[4680]: I0217 17:48:04.428956 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95c5ebe9-31a5-487f-a8a9-923d79843be0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "95c5ebe9-31a5-487f-a8a9-923d79843be0" (UID: "95c5ebe9-31a5-487f-a8a9-923d79843be0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:48:04 crc kubenswrapper[4680]: I0217 17:48:04.429017 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95c5ebe9-31a5-487f-a8a9-923d79843be0-var-lock" (OuterVolumeSpecName: "var-lock") pod "95c5ebe9-31a5-487f-a8a9-923d79843be0" (UID: "95c5ebe9-31a5-487f-a8a9-923d79843be0"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:48:04 crc kubenswrapper[4680]: I0217 17:48:04.429044 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:48:04 crc kubenswrapper[4680]: I0217 17:48:04.429305 4680 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/95c5ebe9-31a5-487f-a8a9-923d79843be0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 17:48:04 crc kubenswrapper[4680]: I0217 17:48:04.429537 4680 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 17:48:04 crc kubenswrapper[4680]: I0217 17:48:04.429556 4680 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/95c5ebe9-31a5-487f-a8a9-923d79843be0-var-lock\") on node \"crc\" DevicePath \"\"" Feb 17 17:48:04 crc kubenswrapper[4680]: I0217 17:48:04.429574 4680 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 17 17:48:04 crc kubenswrapper[4680]: I0217 17:48:04.429590 4680 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 17 17:48:04 crc kubenswrapper[4680]: I0217 17:48:04.436700 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95c5ebe9-31a5-487f-a8a9-923d79843be0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "95c5ebe9-31a5-487f-a8a9-923d79843be0" (UID: "95c5ebe9-31a5-487f-a8a9-923d79843be0"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:48:04 crc kubenswrapper[4680]: I0217 17:48:04.530450 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/95c5ebe9-31a5-487f-a8a9-923d79843be0-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.046188 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.046185 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"95c5ebe9-31a5-487f-a8a9-923d79843be0","Type":"ContainerDied","Data":"d796506c029662e6007964aac2067f88a93f892bb4cd670c96eb6084f56b7c31"} Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.046513 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d796506c029662e6007964aac2067f88a93f892bb4cd670c96eb6084f56b7c31" Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.049648 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.050690 4680 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b" exitCode=0 Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.050751 4680 scope.go:117] "RemoveContainer" containerID="19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433" Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.050816 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.062274 4680 status_manager.go:851] "Failed to get status for pod" podUID="95c5ebe9-31a5-487f-a8a9-923d79843be0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.062882 4680 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.063099 4680 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.071627 4680 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.071973 4680 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.072201 4680 status_manager.go:851] "Failed to get status for pod" podUID="95c5ebe9-31a5-487f-a8a9-923d79843be0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.076520 4680 scope.go:117] "RemoveContainer" containerID="6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa" Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.093051 4680 scope.go:117] "RemoveContainer" containerID="f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc" Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.110558 4680 scope.go:117] "RemoveContainer" containerID="9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22" Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.117014 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.128532 4680 scope.go:117] "RemoveContainer" containerID="e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b" Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.149562 4680 scope.go:117] "RemoveContainer" 
containerID="81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519" Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.175817 4680 scope.go:117] "RemoveContainer" containerID="19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433" Feb 17 17:48:05 crc kubenswrapper[4680]: E0217 17:48:05.176351 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\": container with ID starting with 19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433 not found: ID does not exist" containerID="19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433" Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.176394 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433"} err="failed to get container status \"19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\": rpc error: code = NotFound desc = could not find container \"19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433\": container with ID starting with 19efdefe792dba0401ab61b5e1c88dc6aaff0607aa500a307b8fd57f384b9433 not found: ID does not exist" Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.176423 4680 scope.go:117] "RemoveContainer" containerID="6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa" Feb 17 17:48:05 crc kubenswrapper[4680]: E0217 17:48:05.176687 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\": container with ID starting with 6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa not found: ID does not exist" containerID="6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa" Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.176718 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa"} err="failed to get container status \"6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\": rpc error: code = NotFound desc = could not find container \"6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa\": container with ID starting with 6b4003e9bd5ae725889ef4e2d86d55f07a0d20901906ee3c11f1af6d2910a7fa not found: ID does not exist" Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.176737 4680 scope.go:117] "RemoveContainer" containerID="f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc" Feb 17 17:48:05 crc kubenswrapper[4680]: E0217 17:48:05.180212 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\": container with ID starting with f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc not found: ID does not exist" containerID="f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc" Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.180246 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc"} err="failed to get container status \"f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\": rpc error: code = 
NotFound desc = could not find container \"f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc\": container with ID starting with f143298be072094b0d47809ceebe549d304b8e1c247c9a2c478d94e8fb561ebc not found: ID does not exist" Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.180267 4680 scope.go:117] "RemoveContainer" containerID="9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22" Feb 17 17:48:05 crc kubenswrapper[4680]: E0217 17:48:05.180697 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\": container with ID starting with 9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22 not found: ID does not exist" containerID="9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22" Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.180726 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22"} err="failed to get container status \"9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\": rpc error: code = NotFound desc = could not find container \"9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22\": container with ID starting with 9f59da891eb8ad5af600dc607fc809d80946d9335d65e1fbf3b18c815ae13f22 not found: ID does not exist" Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.180743 4680 scope.go:117] "RemoveContainer" containerID="e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b" Feb 17 17:48:05 crc kubenswrapper[4680]: E0217 17:48:05.181230 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\": container with ID starting with e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b not found: ID does not exist" containerID="e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b" Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.181267 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b"} err="failed to get container status \"e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\": rpc error: code = NotFound desc = could not find container \"e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b\": container with ID starting with e31a6b51ffd8f5e0dba44d16788a5a4e86634359467c0dcb5327a637999c9e0b not found: ID does not exist" Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.181289 4680 scope.go:117] "RemoveContainer" containerID="81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519" Feb 17 17:48:05 crc kubenswrapper[4680]: E0217 17:48:05.183080 4680 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.180:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" volumeName="registry-storage" Feb 17 17:48:05 crc kubenswrapper[4680]: E0217 17:48:05.183650 4680 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\": container with ID starting with 81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519 not found: ID does not exist" containerID="81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519" Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.183754 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519"} err="failed to get container status \"81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\": rpc error: code = NotFound desc = could not find container \"81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519\": container with ID starting with 81637aa2fb60df2b873d1febd029ac0ce9c8de19f9c3b9ec1c9aad36d048b519 not found: ID does not exist" Feb 17 17:48:05 crc kubenswrapper[4680]: E0217 17:48:05.390620 4680 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.180:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189519e0e78df19d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 17:48:02.188366237 +0000 UTC m=+211.739264916,LastTimestamp:2026-02-17 17:48:02.188366237 +0000 UTC m=+211.739264916,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.588875 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.588978 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.589025 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" Feb 17 17:48:05 crc kubenswrapper[4680]: I0217 17:48:05.589639 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6"} pod="openshift-machine-config-operator/machine-config-daemon-rfw56" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:48:05 crc 
kubenswrapper[4680]: I0217 17:48:05.589695 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" containerID="cri-o://aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6" gracePeriod=600 Feb 17 17:48:06 crc kubenswrapper[4680]: I0217 17:48:06.058656 4680 generic.go:334] "Generic (PLEG): container finished" podID="8dd08058-016e-4607-8be6-40463674c2fe" containerID="aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6" exitCode=0 Feb 17 17:48:06 crc kubenswrapper[4680]: I0217 17:48:06.058858 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerDied","Data":"aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6"} Feb 17 17:48:06 crc kubenswrapper[4680]: I0217 17:48:06.058922 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerStarted","Data":"edd5ddf683c1c79e1cf067d82d5193107707475d2e8243197b201d5f7d01d27d"} Feb 17 17:48:06 crc kubenswrapper[4680]: I0217 17:48:06.059838 4680 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:06 crc kubenswrapper[4680]: I0217 17:48:06.060236 4680 status_manager.go:851] "Failed to get status for pod" podUID="8dd08058-016e-4607-8be6-40463674c2fe" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-rfw56\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:06 crc kubenswrapper[4680]: I0217 17:48:06.060708 4680 status_manager.go:851] "Failed to get status for pod" podUID="95c5ebe9-31a5-487f-a8a9-923d79843be0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:10 crc kubenswrapper[4680]: E0217 17:48:10.892329 4680 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:10 crc kubenswrapper[4680]: E0217 17:48:10.893244 4680 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:10 crc kubenswrapper[4680]: E0217 17:48:10.893588 4680 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:10 crc kubenswrapper[4680]: E0217 17:48:10.893940 4680 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:10 crc kubenswrapper[4680]: E0217 17:48:10.894210 4680 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:10 crc kubenswrapper[4680]: I0217 17:48:10.894231 4680 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 17 17:48:10 crc kubenswrapper[4680]: E0217 17:48:10.894471 4680 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="200ms" Feb 17 17:48:11 crc kubenswrapper[4680]: E0217 17:48:11.095257 4680 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="400ms" Feb 17 17:48:11 crc kubenswrapper[4680]: I0217 17:48:11.107222 4680 status_manager.go:851] "Failed to get status for pod" podUID="8dd08058-016e-4607-8be6-40463674c2fe" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-rfw56\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:11 crc kubenswrapper[4680]: I0217 17:48:11.107541 4680 status_manager.go:851] "Failed to get status for pod" podUID="95c5ebe9-31a5-487f-a8a9-923d79843be0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:11 crc kubenswrapper[4680]: I0217 17:48:11.107804 4680 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:11 crc kubenswrapper[4680]: E0217 17:48:11.495845 4680 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="800ms" Feb 17 17:48:12 crc kubenswrapper[4680]: E0217 17:48:12.296539 4680 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="1.6s" Feb 17 17:48:13 crc kubenswrapper[4680]: E0217 17:48:13.897720 4680 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="3.2s" Feb 17 
17:48:14 crc kubenswrapper[4680]: I0217 17:48:14.099189 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 17 17:48:14 crc kubenswrapper[4680]: I0217 17:48:14.099547 4680 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2" exitCode=1 Feb 17 17:48:14 crc kubenswrapper[4680]: I0217 17:48:14.099579 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2"} Feb 17 17:48:14 crc kubenswrapper[4680]: I0217 17:48:14.100045 4680 scope.go:117] "RemoveContainer" containerID="c8e28d47e0e65af62c475a117f528b368bb0999e9d09b0cd21037b710254bdd2" Feb 17 17:48:14 crc kubenswrapper[4680]: I0217 17:48:14.100310 4680 status_manager.go:851] "Failed to get status for pod" podUID="8dd08058-016e-4607-8be6-40463674c2fe" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-rfw56\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:14 crc kubenswrapper[4680]: I0217 17:48:14.100781 4680 status_manager.go:851] "Failed to get status for pod" podUID="95c5ebe9-31a5-487f-a8a9-923d79843be0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:14 crc kubenswrapper[4680]: I0217 17:48:14.101060 4680 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:14 crc kubenswrapper[4680]: I0217 17:48:14.101389 4680 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:14 crc kubenswrapper[4680]: I0217 17:48:14.104387 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:48:14 crc kubenswrapper[4680]: I0217 17:48:14.105070 4680 status_manager.go:851] "Failed to get status for pod" podUID="8dd08058-016e-4607-8be6-40463674c2fe" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-rfw56\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:14 crc kubenswrapper[4680]: I0217 17:48:14.105491 4680 status_manager.go:851] "Failed to get status for pod" podUID="95c5ebe9-31a5-487f-a8a9-923d79843be0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:14 crc kubenswrapper[4680]: I0217 17:48:14.105752 4680 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:14 crc kubenswrapper[4680]: I0217 17:48:14.106003 4680 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:14 crc kubenswrapper[4680]: I0217 17:48:14.119580 4680 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5e5e24bd-d23b-4c8f-ba86-76e64afcd012" Feb 17 17:48:14 crc kubenswrapper[4680]: I0217 17:48:14.119624 4680 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5e5e24bd-d23b-4c8f-ba86-76e64afcd012" Feb 17 17:48:14 crc kubenswrapper[4680]: E0217 17:48:14.120099 4680 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:48:14 crc kubenswrapper[4680]: I0217 17:48:14.120653 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:48:14 crc kubenswrapper[4680]: W0217 17:48:14.143340 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-afa68204b2193ba4e3f025026edd30551d696afce7db657a4bf6b2faeacfeaa7 WatchSource:0}: Error finding container afa68204b2193ba4e3f025026edd30551d696afce7db657a4bf6b2faeacfeaa7: Status 404 returned error can't find the container with id afa68204b2193ba4e3f025026edd30551d696afce7db657a4bf6b2faeacfeaa7 Feb 17 17:48:15 crc kubenswrapper[4680]: I0217 17:48:15.105941 4680 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="dec004e194151f00abf7dee66aa9c4cb8ca341d3743e56fcd586065f274064ae" exitCode=0 Feb 17 17:48:15 crc kubenswrapper[4680]: I0217 17:48:15.109153 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 17 17:48:15 crc kubenswrapper[4680]: I0217 17:48:15.110535 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"dec004e194151f00abf7dee66aa9c4cb8ca341d3743e56fcd586065f274064ae"} Feb 17 17:48:15 crc kubenswrapper[4680]: I0217 17:48:15.110569 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"afa68204b2193ba4e3f025026edd30551d696afce7db657a4bf6b2faeacfeaa7"} Feb 17 17:48:15 crc kubenswrapper[4680]: I0217 17:48:15.110580 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"488f3fe038779b1ff6e5936845cf4e51c05811acb9631f2798e8c530a97b9bfe"} Feb 17 17:48:15 crc kubenswrapper[4680]: I0217 17:48:15.110944 4680 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5e5e24bd-d23b-4c8f-ba86-76e64afcd012" Feb 17 17:48:15 crc kubenswrapper[4680]: I0217 17:48:15.110974 4680 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5e5e24bd-d23b-4c8f-ba86-76e64afcd012" Feb 17 17:48:15 crc kubenswrapper[4680]: I0217 17:48:15.111223 4680 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:15 crc kubenswrapper[4680]: E0217 17:48:15.111378 4680 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:48:15 crc kubenswrapper[4680]: I0217 17:48:15.111768 4680 status_manager.go:851] "Failed to get status for pod" podUID="8dd08058-016e-4607-8be6-40463674c2fe" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-rfw56\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:15 crc kubenswrapper[4680]: I0217 17:48:15.112208 4680 status_manager.go:851] "Failed to get status for pod" podUID="95c5ebe9-31a5-487f-a8a9-923d79843be0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:15 crc kubenswrapper[4680]: I0217 17:48:15.112454 4680 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:15 crc kubenswrapper[4680]: I0217 17:48:15.112734 4680 status_manager.go:851] "Failed to get status for pod" podUID="95c5ebe9-31a5-487f-a8a9-923d79843be0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:15 crc kubenswrapper[4680]: I0217 17:48:15.112944 4680 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:15 crc kubenswrapper[4680]: I0217 17:48:15.113146 4680 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:15 crc kubenswrapper[4680]: I0217 17:48:15.113386 4680 status_manager.go:851] "Failed to get status for pod" podUID="8dd08058-016e-4607-8be6-40463674c2fe" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-rfw56\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 17 17:48:15 crc kubenswrapper[4680]: E0217 17:48:15.392761 4680 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.180:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189519e0e78df19d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 17:48:02.188366237 +0000 UTC m=+211.739264916,LastTimestamp:2026-02-17 17:48:02.188366237 +0000 UTC m=+211.739264916,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 17:48:16 crc kubenswrapper[4680]: I0217 17:48:16.117561 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cb2afc710a34c82048f976091c70f3cdb5c805dcbbdcfb0b3f7da410a5bbe261"} Feb 17 17:48:16 crc kubenswrapper[4680]: I0217 17:48:16.117631 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"126eab412a90e82e0730092478665df8462c44cacb6b33e9f046ca50c66aeb0d"} Feb 17 17:48:16 crc kubenswrapper[4680]: I0217 17:48:16.117661 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1aa8ff30bda2153ce5e14c7a132c2819bf0aba3be72c756062bdd7057c0371b3"} Feb 17 17:48:16 crc kubenswrapper[4680]: I0217 17:48:16.117675 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"622ccc47bfcc02e84b2fe805b02ead19d11a610d26376dc8a956537ccd622f88"} Feb 17 17:48:16 crc kubenswrapper[4680]: I0217 17:48:16.117688 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"705c83e363fc903f02de8d6ed308f88a8106cffc2f4151b0eff7b71ef4bb92e5"} Feb 17 17:48:16 crc kubenswrapper[4680]: I0217 17:48:16.117970 4680 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5e5e24bd-d23b-4c8f-ba86-76e64afcd012" Feb 17 17:48:16 crc kubenswrapper[4680]: I0217 17:48:16.117987 4680 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5e5e24bd-d23b-4c8f-ba86-76e64afcd012" Feb 17 17:48:16 crc kubenswrapper[4680]: I0217 17:48:16.118389 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:48:17 crc kubenswrapper[4680]: I0217 17:48:17.360708 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 17:48:19 crc kubenswrapper[4680]: I0217 17:48:19.121788 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:48:19 crc kubenswrapper[4680]: I0217 17:48:19.121845 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:48:19 crc kubenswrapper[4680]: I0217 17:48:19.128115 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:48:21 crc kubenswrapper[4680]: I0217 17:48:21.721803 4680 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:48:21 crc kubenswrapper[4680]: I0217 17:48:21.768472 4680 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 17:48:21 crc kubenswrapper[4680]: I0217 17:48:21.773408 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 17:48:21 crc kubenswrapper[4680]: I0217 17:48:21.854756 4680 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6097ea79-3027-4566-a2e3-271fcca502e0" Feb 17 17:48:22 crc kubenswrapper[4680]: I0217 17:48:22.145769 4680 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5e5e24bd-d23b-4c8f-ba86-76e64afcd012" Feb 17 17:48:22 crc kubenswrapper[4680]: I0217 17:48:22.145802 4680 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5e5e24bd-d23b-4c8f-ba86-76e64afcd012" Feb 17 17:48:22 crc kubenswrapper[4680]: I0217 17:48:22.150185 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:48:22 crc kubenswrapper[4680]: I0217 17:48:22.151535 4680 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6097ea79-3027-4566-a2e3-271fcca502e0" Feb 17 17:48:23 crc kubenswrapper[4680]: I0217 17:48:23.152000 4680 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5e5e24bd-d23b-4c8f-ba86-76e64afcd012" Feb 17 17:48:23 crc kubenswrapper[4680]: I0217 17:48:23.152036 4680 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5e5e24bd-d23b-4c8f-ba86-76e64afcd012" Feb 17 17:48:23 crc kubenswrapper[4680]: I0217 17:48:23.156145 4680 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6097ea79-3027-4566-a2e3-271fcca502e0" Feb 17 17:48:27 crc kubenswrapper[4680]: I0217 17:48:27.367666 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 17:48:32 crc kubenswrapper[4680]: I0217 17:48:32.971001 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 17 17:48:32 crc kubenswrapper[4680]: I0217 17:48:32.987685 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 17 17:48:32 crc kubenswrapper[4680]: I0217 17:48:32.988538 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 17 17:48:32 crc kubenswrapper[4680]: I0217 17:48:32.989732 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 17 17:48:33 crc kubenswrapper[4680]: I0217 17:48:33.096000 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 17 17:48:33 crc kubenswrapper[4680]: I0217 17:48:33.341401 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 17 17:48:33 crc 
kubenswrapper[4680]: I0217 17:48:33.342061 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 17 17:48:33 crc kubenswrapper[4680]: I0217 17:48:33.346474 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 17 17:48:33 crc kubenswrapper[4680]: I0217 17:48:33.482261 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 17 17:48:33 crc kubenswrapper[4680]: I0217 17:48:33.660073 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 17 17:48:33 crc kubenswrapper[4680]: I0217 17:48:33.788247 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 17 17:48:33 crc kubenswrapper[4680]: I0217 17:48:33.817736 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 17 17:48:33 crc kubenswrapper[4680]: I0217 17:48:33.926762 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 17 17:48:34 crc kubenswrapper[4680]: I0217 17:48:34.100091 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 17:48:34 crc kubenswrapper[4680]: I0217 17:48:34.101637 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 17 17:48:34 crc kubenswrapper[4680]: I0217 17:48:34.122628 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 17 17:48:34 crc kubenswrapper[4680]: I0217 17:48:34.282600 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 17 17:48:34 crc kubenswrapper[4680]: I0217 17:48:34.316409 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 17:48:34 crc kubenswrapper[4680]: I0217 17:48:34.335386 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 17 17:48:34 crc kubenswrapper[4680]: I0217 17:48:34.337992 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 17 17:48:34 crc kubenswrapper[4680]: I0217 17:48:34.442602 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 17 17:48:34 crc kubenswrapper[4680]: I0217 17:48:34.447179 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 17 17:48:34 crc kubenswrapper[4680]: I0217 17:48:34.696697 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 17 17:48:34 crc kubenswrapper[4680]: I0217 17:48:34.789888 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 17 17:48:34 crc kubenswrapper[4680]: I0217 17:48:34.864582 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 17 17:48:34 crc kubenswrapper[4680]: I0217 
17:48:34.989834 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 17 17:48:35 crc kubenswrapper[4680]: I0217 17:48:35.181194 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 17:48:35 crc kubenswrapper[4680]: I0217 17:48:35.216164 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 17 17:48:35 crc kubenswrapper[4680]: I0217 17:48:35.296949 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 17 17:48:35 crc kubenswrapper[4680]: I0217 17:48:35.314198 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 17 17:48:35 crc kubenswrapper[4680]: I0217 17:48:35.339052 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 17 17:48:35 crc kubenswrapper[4680]: I0217 17:48:35.344532 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 17 17:48:35 crc kubenswrapper[4680]: I0217 17:48:35.386595 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 17 17:48:35 crc kubenswrapper[4680]: I0217 17:48:35.598335 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 17 17:48:35 crc kubenswrapper[4680]: I0217 17:48:35.766070 4680 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 17 17:48:35 crc kubenswrapper[4680]: I0217 17:48:35.852605 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 17 17:48:36 crc kubenswrapper[4680]: I0217 17:48:36.261994 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 17 17:48:36 crc kubenswrapper[4680]: I0217 17:48:36.341369 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 17 17:48:36 crc kubenswrapper[4680]: I0217 17:48:36.400277 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 17 17:48:36 crc kubenswrapper[4680]: I0217 17:48:36.628613 4680 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 17 17:48:36 crc kubenswrapper[4680]: I0217 17:48:36.814542 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 17 17:48:36 crc kubenswrapper[4680]: I0217 17:48:36.824507 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 17 17:48:36 crc kubenswrapper[4680]: I0217 17:48:36.880105 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 17 17:48:36 crc kubenswrapper[4680]: I0217 17:48:36.913008 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 17 17:48:37 crc kubenswrapper[4680]: I0217 17:48:37.025184 4680 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 17 17:48:37 crc kubenswrapper[4680]: I0217 17:48:37.097977 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 17 17:48:37 crc kubenswrapper[4680]: I0217 17:48:37.191308 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 17 17:48:37 crc kubenswrapper[4680]: I0217 17:48:37.325123 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 17 17:48:37 crc kubenswrapper[4680]: I0217 17:48:37.407116 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 17:48:37 crc kubenswrapper[4680]: I0217 17:48:37.407740 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 17 17:48:37 crc kubenswrapper[4680]: I0217 17:48:37.412984 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 17 17:48:37 crc kubenswrapper[4680]: I0217 17:48:37.431871 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 17 17:48:37 crc kubenswrapper[4680]: I0217 17:48:37.447925 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 17 17:48:37 crc kubenswrapper[4680]: I0217 17:48:37.475991 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 17 17:48:37 crc kubenswrapper[4680]: I0217 17:48:37.500412 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 17 17:48:37 crc kubenswrapper[4680]: I0217 17:48:37.626133 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 17 17:48:37 crc kubenswrapper[4680]: I0217 17:48:37.731937 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 17 17:48:37 crc kubenswrapper[4680]: I0217 17:48:37.812799 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 17 17:48:37 crc kubenswrapper[4680]: I0217 17:48:37.872631 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 17 17:48:37 crc kubenswrapper[4680]: I0217 17:48:37.887467 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 17 17:48:37 crc kubenswrapper[4680]: I0217 17:48:37.906401 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 17 17:48:38 crc kubenswrapper[4680]: I0217 17:48:38.423663 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 17 17:48:38 crc kubenswrapper[4680]: I0217 17:48:38.426935 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 17 17:48:38 crc kubenswrapper[4680]: I0217 17:48:38.427222 4680 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 17 17:48:38 crc kubenswrapper[4680]: I0217 17:48:38.427438 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 17 17:48:38 crc kubenswrapper[4680]: I0217 17:48:38.436561 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 17:48:38 crc kubenswrapper[4680]: I0217 17:48:38.446385 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 17 17:48:38 crc kubenswrapper[4680]: I0217 17:48:38.466953 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 17 17:48:38 crc kubenswrapper[4680]: I0217 17:48:38.533515 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 17 17:48:38 crc kubenswrapper[4680]: I0217 17:48:38.553818 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 17 17:48:38 crc kubenswrapper[4680]: I0217 17:48:38.564850 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 17 17:48:38 crc kubenswrapper[4680]: I0217 17:48:38.601148 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 17 17:48:38 crc kubenswrapper[4680]: I0217 17:48:38.717600 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 17 17:48:38 crc kubenswrapper[4680]: I0217 17:48:38.758702 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 17 17:48:38 crc kubenswrapper[4680]: I0217 17:48:38.805860 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 17 17:48:38 crc kubenswrapper[4680]: I0217 17:48:38.806968 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 17 17:48:38 crc kubenswrapper[4680]: I0217 17:48:38.840069 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 17 17:48:38 crc kubenswrapper[4680]: I0217 17:48:38.981625 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 17 17:48:39 crc kubenswrapper[4680]: I0217 17:48:39.044573 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 17 17:48:39 crc kubenswrapper[4680]: I0217 17:48:39.054750 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 17 17:48:39 crc kubenswrapper[4680]: I0217 17:48:39.119971 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 17 17:48:39 crc kubenswrapper[4680]: I0217 17:48:39.137058 4680 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 17 17:48:39 crc kubenswrapper[4680]: I0217 17:48:39.224212 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 17 17:48:39 crc kubenswrapper[4680]: I0217 17:48:39.245942 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 17 17:48:39 crc kubenswrapper[4680]: I0217 17:48:39.277209 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 17:48:39 crc kubenswrapper[4680]: I0217 17:48:39.316832 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 17 17:48:39 crc kubenswrapper[4680]: I0217 17:48:39.325590 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 17 17:48:39 crc kubenswrapper[4680]: I0217 17:48:39.537138 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 17 17:48:39 crc kubenswrapper[4680]: I0217 17:48:39.562459 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 17 17:48:39 crc kubenswrapper[4680]: I0217 17:48:39.585584 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 17:48:40 crc kubenswrapper[4680]: I0217 17:48:40.437464 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 17 17:48:40 crc kubenswrapper[4680]: I0217 17:48:40.438377 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 17 17:48:40 crc kubenswrapper[4680]: I0217 17:48:40.438636 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 17 17:48:40 crc kubenswrapper[4680]: I0217 17:48:40.438784 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 17 17:48:40 crc kubenswrapper[4680]: I0217 17:48:40.438985 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 17 17:48:40 crc kubenswrapper[4680]: I0217 17:48:40.444114 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 17 17:48:40 crc kubenswrapper[4680]: I0217 17:48:40.444369 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 17 17:48:40 crc kubenswrapper[4680]: I0217 17:48:40.444630 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 17 17:48:40 crc kubenswrapper[4680]: I0217 17:48:40.444742 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 17 17:48:40 crc kubenswrapper[4680]: I0217 17:48:40.444853 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 17 17:48:40 crc kubenswrapper[4680]: I0217 17:48:40.444977 4680 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 17 17:48:40 crc kubenswrapper[4680]: I0217 17:48:40.445146 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 17 17:48:40 crc kubenswrapper[4680]: I0217 17:48:40.445260 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 17 17:48:40 crc kubenswrapper[4680]: I0217 17:48:40.445373 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 17 17:48:40 crc kubenswrapper[4680]: I0217 17:48:40.445477 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 17 17:48:40 crc kubenswrapper[4680]: I0217 17:48:40.449592 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 17 17:48:40 crc kubenswrapper[4680]: I0217 17:48:40.475552 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 17 17:48:40 crc kubenswrapper[4680]: I0217 17:48:40.528200 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 17 17:48:40 crc kubenswrapper[4680]: I0217 17:48:40.546487 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 17 17:48:40 crc kubenswrapper[4680]: I0217 17:48:40.683920 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 17 17:48:40 crc kubenswrapper[4680]: I0217 17:48:40.724494 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 17 17:48:40 crc kubenswrapper[4680]: I0217 17:48:40.753362 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 17 17:48:40 crc kubenswrapper[4680]: I0217 17:48:40.802743 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 17 17:48:41 crc kubenswrapper[4680]: I0217 17:48:41.333086 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 17 17:48:41 crc kubenswrapper[4680]: I0217 17:48:41.333254 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 17 17:48:41 crc kubenswrapper[4680]: I0217 17:48:41.343710 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 17 17:48:41 crc kubenswrapper[4680]: I0217 17:48:41.344489 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 17 17:48:41 crc kubenswrapper[4680]: I0217 17:48:41.362336 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 17 17:48:41 crc kubenswrapper[4680]: I0217 17:48:41.364442 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 17:48:41 crc kubenswrapper[4680]: I0217 17:48:41.364944 4680 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 17 17:48:41 crc kubenswrapper[4680]: I0217 17:48:41.365179 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 17 17:48:41 crc kubenswrapper[4680]: I0217 17:48:41.489346 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 17 17:48:41 crc kubenswrapper[4680]: I0217 17:48:41.495362 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 17 17:48:41 crc kubenswrapper[4680]: I0217 17:48:41.546553 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 17:48:41 crc kubenswrapper[4680]: I0217 17:48:41.591226 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 17 17:48:41 crc kubenswrapper[4680]: I0217 17:48:41.633070 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 17 17:48:41 crc kubenswrapper[4680]: I0217 17:48:41.713419 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 17 17:48:41 crc kubenswrapper[4680]: I0217 17:48:41.748663 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 17 17:48:41 crc kubenswrapper[4680]: I0217 17:48:41.797926 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 17 17:48:41 crc kubenswrapper[4680]: I0217 17:48:41.859954 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 17 17:48:41 crc kubenswrapper[4680]: I0217 17:48:41.940568 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 17 17:48:41 crc kubenswrapper[4680]: I0217 17:48:41.943223 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 17 17:48:41 crc kubenswrapper[4680]: I0217 17:48:41.961769 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 17 17:48:42 crc kubenswrapper[4680]: I0217 17:48:42.061661 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 17 17:48:42 crc kubenswrapper[4680]: I0217 17:48:42.072914 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 17 17:48:42 crc kubenswrapper[4680]: I0217 17:48:42.116140 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 17 17:48:42 crc kubenswrapper[4680]: I0217 17:48:42.171763 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 17 17:48:42 crc kubenswrapper[4680]: I0217 17:48:42.171919 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 17 17:48:42 crc kubenswrapper[4680]: I0217 17:48:42.262340 4680 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 17 17:48:42 crc kubenswrapper[4680]: I0217 17:48:42.335860 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 17 17:48:42 crc kubenswrapper[4680]: I0217 17:48:42.347583 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 17 17:48:42 crc kubenswrapper[4680]: I0217 17:48:42.348363 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 17 17:48:42 crc kubenswrapper[4680]: I0217 17:48:42.374868 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 17 17:48:42 crc kubenswrapper[4680]: I0217 17:48:42.518343 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 17 17:48:42 crc kubenswrapper[4680]: I0217 17:48:42.546539 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 17 17:48:42 crc kubenswrapper[4680]: I0217 17:48:42.589977 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 17 17:48:42 crc kubenswrapper[4680]: I0217 17:48:42.674602 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 17 17:48:42 crc kubenswrapper[4680]: I0217 17:48:42.678113 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 17 17:48:42 crc kubenswrapper[4680]: I0217 17:48:42.706536 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 17 17:48:42 crc kubenswrapper[4680]: I0217 17:48:42.772580 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 17 17:48:42 crc kubenswrapper[4680]: I0217 17:48:42.832188 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 17 17:48:42 crc kubenswrapper[4680]: I0217 17:48:42.883906 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 17 17:48:42 crc kubenswrapper[4680]: I0217 17:48:42.972531 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 17 17:48:43 crc kubenswrapper[4680]: I0217 17:48:43.059605 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 17 17:48:43 crc kubenswrapper[4680]: I0217 17:48:43.089493 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 17 17:48:43 crc kubenswrapper[4680]: I0217 17:48:43.100662 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 17 17:48:43 crc kubenswrapper[4680]: I0217 17:48:43.140851 4680 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 17 17:48:43 crc kubenswrapper[4680]: I0217 17:48:43.142146 4680 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 17 17:48:43 crc kubenswrapper[4680]: I0217 17:48:43.143667 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=42.143646873 podStartE2EDuration="42.143646873s" podCreationTimestamp="2026-02-17 17:48:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:48:21.810915905 +0000 UTC m=+231.361814594" watchObservedRunningTime="2026-02-17 17:48:43.143646873 +0000 UTC m=+252.694545552" Feb 17 17:48:43 crc kubenswrapper[4680]: I0217 17:48:43.145377 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 17:48:43 crc kubenswrapper[4680]: I0217 17:48:43.145426 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 17:48:43 crc kubenswrapper[4680]: I0217 17:48:43.149881 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 17:48:43 crc kubenswrapper[4680]: I0217 17:48:43.165502 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.165476926 podStartE2EDuration="22.165476926s" podCreationTimestamp="2026-02-17 17:48:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:48:43.164752804 +0000 UTC m=+252.715651483" watchObservedRunningTime="2026-02-17 17:48:43.165476926 +0000 UTC m=+252.716375605" Feb 17 17:48:43 crc kubenswrapper[4680]: I0217 17:48:43.284779 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 17 17:48:43 crc kubenswrapper[4680]: I0217 17:48:43.299309 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 17 17:48:43 crc kubenswrapper[4680]: I0217 17:48:43.333595 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 17 17:48:43 crc kubenswrapper[4680]: I0217 17:48:43.350213 4680 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 17 17:48:43 crc kubenswrapper[4680]: I0217 17:48:43.360283 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 17 17:48:43 crc kubenswrapper[4680]: I0217 17:48:43.377393 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 17 17:48:43 crc kubenswrapper[4680]: I0217 17:48:43.380638 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 17 17:48:43 crc kubenswrapper[4680]: I0217 17:48:43.445002 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 17 17:48:43 crc kubenswrapper[4680]: I0217 17:48:43.533892 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 17 17:48:43 crc kubenswrapper[4680]: I0217 17:48:43.673393 4680 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"trusted-ca-bundle" Feb 17 17:48:43 crc kubenswrapper[4680]: I0217 17:48:43.707144 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 17 17:48:43 crc kubenswrapper[4680]: I0217 17:48:43.759137 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 17 17:48:43 crc kubenswrapper[4680]: I0217 17:48:43.794863 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 17 17:48:43 crc kubenswrapper[4680]: I0217 17:48:43.817854 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 17 17:48:43 crc kubenswrapper[4680]: I0217 17:48:43.832013 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 17 17:48:43 crc kubenswrapper[4680]: I0217 17:48:43.839300 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 17 17:48:43 crc kubenswrapper[4680]: I0217 17:48:43.843596 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 17 17:48:43 crc kubenswrapper[4680]: I0217 17:48:43.882583 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 17 17:48:43 crc kubenswrapper[4680]: I0217 17:48:43.984222 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 17 17:48:44 crc kubenswrapper[4680]: I0217 17:48:44.016709 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 17 17:48:44 crc kubenswrapper[4680]: I0217 17:48:44.023452 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 17 17:48:44 crc kubenswrapper[4680]: I0217 17:48:44.157001 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 17 17:48:44 crc kubenswrapper[4680]: I0217 17:48:44.335217 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 17 17:48:44 crc kubenswrapper[4680]: I0217 17:48:44.424458 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 17 17:48:44 crc kubenswrapper[4680]: I0217 17:48:44.439430 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 17 17:48:44 crc kubenswrapper[4680]: I0217 17:48:44.450091 4680 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 17:48:44 crc kubenswrapper[4680]: I0217 17:48:44.450357 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://459e9c94246b4e3258c1f64f6bc4908a463692d511a1e69a3ba3ad59411439af" gracePeriod=5 Feb 17 17:48:44 crc kubenswrapper[4680]: 
I0217 17:48:44.474496 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 17 17:48:44 crc kubenswrapper[4680]: I0217 17:48:44.605719 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 17 17:48:44 crc kubenswrapper[4680]: I0217 17:48:44.637774 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 17 17:48:44 crc kubenswrapper[4680]: I0217 17:48:44.666370 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 17 17:48:44 crc kubenswrapper[4680]: I0217 17:48:44.671408 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 17 17:48:44 crc kubenswrapper[4680]: I0217 17:48:44.768347 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 17 17:48:44 crc kubenswrapper[4680]: I0217 17:48:44.819089 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 17 17:48:44 crc kubenswrapper[4680]: I0217 17:48:44.870262 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 17 17:48:44 crc kubenswrapper[4680]: I0217 17:48:44.893596 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 17 17:48:44 crc kubenswrapper[4680]: I0217 17:48:44.896865 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 17 17:48:45 crc kubenswrapper[4680]: I0217 17:48:45.030262 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 17 17:48:45 crc kubenswrapper[4680]: I0217 17:48:45.079246 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 17 17:48:45 crc kubenswrapper[4680]: I0217 17:48:45.112552 4680 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 17 17:48:45 crc kubenswrapper[4680]: I0217 17:48:45.238198 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 17:48:45 crc kubenswrapper[4680]: I0217 17:48:45.334431 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 17 17:48:45 crc kubenswrapper[4680]: I0217 17:48:45.434672 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 17 17:48:45 crc kubenswrapper[4680]: I0217 17:48:45.436001 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 17 17:48:45 crc kubenswrapper[4680]: I0217 17:48:45.456593 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 17 17:48:45 crc kubenswrapper[4680]: I0217 17:48:45.538361 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" 
Feb 17 17:48:45 crc kubenswrapper[4680]: I0217 17:48:45.612643 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 17 17:48:45 crc kubenswrapper[4680]: I0217 17:48:45.616704 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 17 17:48:45 crc kubenswrapper[4680]: I0217 17:48:45.679870 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 17 17:48:45 crc kubenswrapper[4680]: I0217 17:48:45.726420 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 17 17:48:45 crc kubenswrapper[4680]: I0217 17:48:45.864113 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 17 17:48:45 crc kubenswrapper[4680]: I0217 17:48:45.874922 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 17 17:48:45 crc kubenswrapper[4680]: I0217 17:48:45.879435 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 17 17:48:46 crc kubenswrapper[4680]: I0217 17:48:46.025918 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 17 17:48:46 crc kubenswrapper[4680]: I0217 17:48:46.036432 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 17 17:48:46 crc kubenswrapper[4680]: I0217 17:48:46.105702 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 17:48:46 crc kubenswrapper[4680]: I0217 17:48:46.142406 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 17 17:48:46 crc kubenswrapper[4680]: I0217 17:48:46.189159 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 17 17:48:46 crc kubenswrapper[4680]: I0217 17:48:46.306922 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 17 17:48:46 crc kubenswrapper[4680]: I0217 17:48:46.453360 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 17 17:48:46 crc kubenswrapper[4680]: I0217 17:48:46.494612 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 17 17:48:46 crc kubenswrapper[4680]: I0217 17:48:46.600986 4680 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 17 17:48:46 crc kubenswrapper[4680]: I0217 17:48:46.649703 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 17 17:48:46 crc kubenswrapper[4680]: I0217 17:48:46.649799 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 17 17:48:46 crc kubenswrapper[4680]: I0217 17:48:46.754272 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 17 
17:48:46 crc kubenswrapper[4680]: I0217 17:48:46.876167 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 17 17:48:46 crc kubenswrapper[4680]: I0217 17:48:46.940029 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 17 17:48:46 crc kubenswrapper[4680]: I0217 17:48:46.968696 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.007579 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.070940 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.200162 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.238908 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.288321 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qdnhz"] Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.288632 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qdnhz" podUID="a08188de-38b5-4e41-b12d-cf5a1cf488ec" containerName="registry-server" containerID="cri-o://52b5f59888665260200b06dfd5b0a8020169e76df420785a5dc5a0dc18bc0d62" gracePeriod=30 Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.305563 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6hgx6"] Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.305789 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6hgx6" podUID="6ea8ef6d-a9c6-477e-89be-fa625c176d42" containerName="registry-server" containerID="cri-o://54c38411af0a2d57b4d25f55728f6a49700d1bad0f59388cbee84440b77634d2" gracePeriod=30 Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.312095 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.322435 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rjbs8"] Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.322655 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-rjbs8" podUID="17edf562-a2eb-4263-bbce-c65013fd645f" containerName="marketplace-operator" containerID="cri-o://2c5b8edbc9581855f5f2f47cd62168294d0896b869a323bbf844265467f2b9ee" gracePeriod=30 Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.334717 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkz99"] Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.335030 4680 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-kkz99" podUID="de1f714a-f726-49f5-b611-3b0690ed7245" containerName="registry-server" containerID="cri-o://b6b777c84136f51c7b5d52ffffe2fb82e6ba8e3edde05c570308a0ceb0ad470c" gracePeriod=30 Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.343965 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pdls7"] Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.344405 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pdls7" podUID="736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2" containerName="registry-server" containerID="cri-o://4f639cdd2095a72a6ca92aed5a93b145ffa085c045290fda00ea10e25a4ac225" gracePeriod=30 Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.375078 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-slcqr"] Feb 17 17:48:47 crc kubenswrapper[4680]: E0217 17:48:47.375330 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.375345 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 17:48:47 crc kubenswrapper[4680]: E0217 17:48:47.375389 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c5ebe9-31a5-487f-a8a9-923d79843be0" containerName="installer" Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.375397 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="95c5ebe9-31a5-487f-a8a9-923d79843be0" containerName="installer" Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.375533 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.375554 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="95c5ebe9-31a5-487f-a8a9-923d79843be0" containerName="installer" Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.375979 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-slcqr" Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.405099 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8d8d226-a13e-4e0c-b3b9-8fe70a25d02a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-slcqr\" (UID: \"b8d8d226-a13e-4e0c-b3b9-8fe70a25d02a\") " pod="openshift-marketplace/marketplace-operator-79b997595-slcqr" Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.405546 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b8d8d226-a13e-4e0c-b3b9-8fe70a25d02a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-slcqr\" (UID: \"b8d8d226-a13e-4e0c-b3b9-8fe70a25d02a\") " pod="openshift-marketplace/marketplace-operator-79b997595-slcqr" Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.405571 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcjxn\" (UniqueName: \"kubernetes.io/projected/b8d8d226-a13e-4e0c-b3b9-8fe70a25d02a-kube-api-access-hcjxn\") pod \"marketplace-operator-79b997595-slcqr\" (UID: \"b8d8d226-a13e-4e0c-b3b9-8fe70a25d02a\") " pod="openshift-marketplace/marketplace-operator-79b997595-slcqr" Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.496986 4680 generic.go:334] "Generic (PLEG): container finished" podID="736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2" containerID="4f639cdd2095a72a6ca92aed5a93b145ffa085c045290fda00ea10e25a4ac225" exitCode=0 Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.497046 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pdls7" event={"ID":"736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2","Type":"ContainerDied","Data":"4f639cdd2095a72a6ca92aed5a93b145ffa085c045290fda00ea10e25a4ac225"} Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.499370 4680 generic.go:334] "Generic (PLEG): container finished" podID="17edf562-a2eb-4263-bbce-c65013fd645f" containerID="2c5b8edbc9581855f5f2f47cd62168294d0896b869a323bbf844265467f2b9ee" exitCode=0 Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.499413 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rjbs8" event={"ID":"17edf562-a2eb-4263-bbce-c65013fd645f","Type":"ContainerDied","Data":"2c5b8edbc9581855f5f2f47cd62168294d0896b869a323bbf844265467f2b9ee"} Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.501402 4680 generic.go:334] "Generic (PLEG): container finished" podID="de1f714a-f726-49f5-b611-3b0690ed7245" containerID="b6b777c84136f51c7b5d52ffffe2fb82e6ba8e3edde05c570308a0ceb0ad470c" exitCode=0 Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.501442 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkz99" event={"ID":"de1f714a-f726-49f5-b611-3b0690ed7245","Type":"ContainerDied","Data":"b6b777c84136f51c7b5d52ffffe2fb82e6ba8e3edde05c570308a0ceb0ad470c"} Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.503885 4680 generic.go:334] "Generic (PLEG): container finished" podID="6ea8ef6d-a9c6-477e-89be-fa625c176d42" containerID="54c38411af0a2d57b4d25f55728f6a49700d1bad0f59388cbee84440b77634d2" exitCode=0 Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.503927 4680 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-6hgx6" event={"ID":"6ea8ef6d-a9c6-477e-89be-fa625c176d42","Type":"ContainerDied","Data":"54c38411af0a2d57b4d25f55728f6a49700d1bad0f59388cbee84440b77634d2"} Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.505570 4680 generic.go:334] "Generic (PLEG): container finished" podID="a08188de-38b5-4e41-b12d-cf5a1cf488ec" containerID="52b5f59888665260200b06dfd5b0a8020169e76df420785a5dc5a0dc18bc0d62" exitCode=0 Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.505597 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdnhz" event={"ID":"a08188de-38b5-4e41-b12d-cf5a1cf488ec","Type":"ContainerDied","Data":"52b5f59888665260200b06dfd5b0a8020169e76df420785a5dc5a0dc18bc0d62"} Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.524811 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8d8d226-a13e-4e0c-b3b9-8fe70a25d02a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-slcqr\" (UID: \"b8d8d226-a13e-4e0c-b3b9-8fe70a25d02a\") " pod="openshift-marketplace/marketplace-operator-79b997595-slcqr" Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.524891 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b8d8d226-a13e-4e0c-b3b9-8fe70a25d02a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-slcqr\" (UID: \"b8d8d226-a13e-4e0c-b3b9-8fe70a25d02a\") " pod="openshift-marketplace/marketplace-operator-79b997595-slcqr" Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.524919 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcjxn\" (UniqueName: \"kubernetes.io/projected/b8d8d226-a13e-4e0c-b3b9-8fe70a25d02a-kube-api-access-hcjxn\") pod \"marketplace-operator-79b997595-slcqr\" (UID: \"b8d8d226-a13e-4e0c-b3b9-8fe70a25d02a\") " pod="openshift-marketplace/marketplace-operator-79b997595-slcqr" Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.527369 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8d8d226-a13e-4e0c-b3b9-8fe70a25d02a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-slcqr\" (UID: \"b8d8d226-a13e-4e0c-b3b9-8fe70a25d02a\") " pod="openshift-marketplace/marketplace-operator-79b997595-slcqr" Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.538894 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b8d8d226-a13e-4e0c-b3b9-8fe70a25d02a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-slcqr\" (UID: \"b8d8d226-a13e-4e0c-b3b9-8fe70a25d02a\") " pod="openshift-marketplace/marketplace-operator-79b997595-slcqr" Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.545216 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcjxn\" (UniqueName: \"kubernetes.io/projected/b8d8d226-a13e-4e0c-b3b9-8fe70a25d02a-kube-api-access-hcjxn\") pod \"marketplace-operator-79b997595-slcqr\" (UID: \"b8d8d226-a13e-4e0c-b3b9-8fe70a25d02a\") " pod="openshift-marketplace/marketplace-operator-79b997595-slcqr" Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.655075 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 
17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.704673 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.717250 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-slcqr" Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.776872 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qdnhz" Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.902228 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6hgx6" Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.909810 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rjbs8" Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.930455 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw2jc\" (UniqueName: \"kubernetes.io/projected/a08188de-38b5-4e41-b12d-cf5a1cf488ec-kube-api-access-gw2jc\") pod \"a08188de-38b5-4e41-b12d-cf5a1cf488ec\" (UID: \"a08188de-38b5-4e41-b12d-cf5a1cf488ec\") " Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.930496 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a08188de-38b5-4e41-b12d-cf5a1cf488ec-catalog-content\") pod \"a08188de-38b5-4e41-b12d-cf5a1cf488ec\" (UID: \"a08188de-38b5-4e41-b12d-cf5a1cf488ec\") " Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.930570 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a08188de-38b5-4e41-b12d-cf5a1cf488ec-utilities\") pod \"a08188de-38b5-4e41-b12d-cf5a1cf488ec\" (UID: \"a08188de-38b5-4e41-b12d-cf5a1cf488ec\") " Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.931409 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a08188de-38b5-4e41-b12d-cf5a1cf488ec-utilities" (OuterVolumeSpecName: "utilities") pod "a08188de-38b5-4e41-b12d-cf5a1cf488ec" (UID: "a08188de-38b5-4e41-b12d-cf5a1cf488ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.936566 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a08188de-38b5-4e41-b12d-cf5a1cf488ec-kube-api-access-gw2jc" (OuterVolumeSpecName: "kube-api-access-gw2jc") pod "a08188de-38b5-4e41-b12d-cf5a1cf488ec" (UID: "a08188de-38b5-4e41-b12d-cf5a1cf488ec"). InnerVolumeSpecName "kube-api-access-gw2jc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.936802 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pdls7" Feb 17 17:48:47 crc kubenswrapper[4680]: I0217 17:48:47.941045 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkz99" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.007377 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.014690 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a08188de-38b5-4e41-b12d-cf5a1cf488ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a08188de-38b5-4e41-b12d-cf5a1cf488ec" (UID: "a08188de-38b5-4e41-b12d-cf5a1cf488ec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.032489 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfmq6\" (UniqueName: \"kubernetes.io/projected/6ea8ef6d-a9c6-477e-89be-fa625c176d42-kube-api-access-sfmq6\") pod \"6ea8ef6d-a9c6-477e-89be-fa625c176d42\" (UID: \"6ea8ef6d-a9c6-477e-89be-fa625c176d42\") " Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.032533 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ea8ef6d-a9c6-477e-89be-fa625c176d42-utilities\") pod \"6ea8ef6d-a9c6-477e-89be-fa625c176d42\" (UID: \"6ea8ef6d-a9c6-477e-89be-fa625c176d42\") " Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.032563 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de1f714a-f726-49f5-b611-3b0690ed7245-catalog-content\") pod \"de1f714a-f726-49f5-b611-3b0690ed7245\" (UID: \"de1f714a-f726-49f5-b611-3b0690ed7245\") " Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.032581 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de1f714a-f726-49f5-b611-3b0690ed7245-utilities\") pod \"de1f714a-f726-49f5-b611-3b0690ed7245\" (UID: \"de1f714a-f726-49f5-b611-3b0690ed7245\") " Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.032631 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-296x8\" (UniqueName: \"kubernetes.io/projected/de1f714a-f726-49f5-b611-3b0690ed7245-kube-api-access-296x8\") pod \"de1f714a-f726-49f5-b611-3b0690ed7245\" (UID: \"de1f714a-f726-49f5-b611-3b0690ed7245\") " Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.032648 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/17edf562-a2eb-4263-bbce-c65013fd645f-marketplace-operator-metrics\") pod \"17edf562-a2eb-4263-bbce-c65013fd645f\" (UID: \"17edf562-a2eb-4263-bbce-c65013fd645f\") " Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.032670 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2-catalog-content\") pod \"736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2\" (UID: \"736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2\") " Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.032710 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ea8ef6d-a9c6-477e-89be-fa625c176d42-catalog-content\") pod \"6ea8ef6d-a9c6-477e-89be-fa625c176d42\" (UID: 
\"6ea8ef6d-a9c6-477e-89be-fa625c176d42\") " Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.032731 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmpm4\" (UniqueName: \"kubernetes.io/projected/17edf562-a2eb-4263-bbce-c65013fd645f-kube-api-access-wmpm4\") pod \"17edf562-a2eb-4263-bbce-c65013fd645f\" (UID: \"17edf562-a2eb-4263-bbce-c65013fd645f\") " Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.032758 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2-utilities\") pod \"736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2\" (UID: \"736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2\") " Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.032783 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17edf562-a2eb-4263-bbce-c65013fd645f-marketplace-trusted-ca\") pod \"17edf562-a2eb-4263-bbce-c65013fd645f\" (UID: \"17edf562-a2eb-4263-bbce-c65013fd645f\") " Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.032817 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgxdr\" (UniqueName: \"kubernetes.io/projected/736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2-kube-api-access-mgxdr\") pod \"736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2\" (UID: \"736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2\") " Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.033002 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gw2jc\" (UniqueName: \"kubernetes.io/projected/a08188de-38b5-4e41-b12d-cf5a1cf488ec-kube-api-access-gw2jc\") on node \"crc\" DevicePath \"\"" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.033014 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a08188de-38b5-4e41-b12d-cf5a1cf488ec-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.033023 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a08188de-38b5-4e41-b12d-cf5a1cf488ec-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.037629 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17edf562-a2eb-4263-bbce-c65013fd645f-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "17edf562-a2eb-4263-bbce-c65013fd645f" (UID: "17edf562-a2eb-4263-bbce-c65013fd645f"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.038483 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de1f714a-f726-49f5-b611-3b0690ed7245-utilities" (OuterVolumeSpecName: "utilities") pod "de1f714a-f726-49f5-b611-3b0690ed7245" (UID: "de1f714a-f726-49f5-b611-3b0690ed7245"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.039094 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17edf562-a2eb-4263-bbce-c65013fd645f-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "17edf562-a2eb-4263-bbce-c65013fd645f" (UID: "17edf562-a2eb-4263-bbce-c65013fd645f"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.041575 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ea8ef6d-a9c6-477e-89be-fa625c176d42-utilities" (OuterVolumeSpecName: "utilities") pod "6ea8ef6d-a9c6-477e-89be-fa625c176d42" (UID: "6ea8ef6d-a9c6-477e-89be-fa625c176d42"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.041915 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de1f714a-f726-49f5-b611-3b0690ed7245-kube-api-access-296x8" (OuterVolumeSpecName: "kube-api-access-296x8") pod "de1f714a-f726-49f5-b611-3b0690ed7245" (UID: "de1f714a-f726-49f5-b611-3b0690ed7245"). InnerVolumeSpecName "kube-api-access-296x8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.041933 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea8ef6d-a9c6-477e-89be-fa625c176d42-kube-api-access-sfmq6" (OuterVolumeSpecName: "kube-api-access-sfmq6") pod "6ea8ef6d-a9c6-477e-89be-fa625c176d42" (UID: "6ea8ef6d-a9c6-477e-89be-fa625c176d42"). InnerVolumeSpecName "kube-api-access-sfmq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.042125 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17edf562-a2eb-4263-bbce-c65013fd645f-kube-api-access-wmpm4" (OuterVolumeSpecName: "kube-api-access-wmpm4") pod "17edf562-a2eb-4263-bbce-c65013fd645f" (UID: "17edf562-a2eb-4263-bbce-c65013fd645f"). InnerVolumeSpecName "kube-api-access-wmpm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.044075 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2-kube-api-access-mgxdr" (OuterVolumeSpecName: "kube-api-access-mgxdr") pod "736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2" (UID: "736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2"). InnerVolumeSpecName "kube-api-access-mgxdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.047522 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2-utilities" (OuterVolumeSpecName: "utilities") pod "736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2" (UID: "736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.059324 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de1f714a-f726-49f5-b611-3b0690ed7245-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de1f714a-f726-49f5-b611-3b0690ed7245" (UID: "de1f714a-f726-49f5-b611-3b0690ed7245"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.110204 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ea8ef6d-a9c6-477e-89be-fa625c176d42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ea8ef6d-a9c6-477e-89be-fa625c176d42" (UID: "6ea8ef6d-a9c6-477e-89be-fa625c176d42"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.133968 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-296x8\" (UniqueName: \"kubernetes.io/projected/de1f714a-f726-49f5-b611-3b0690ed7245-kube-api-access-296x8\") on node \"crc\" DevicePath \"\"" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.134011 4680 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/17edf562-a2eb-4263-bbce-c65013fd645f-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.134021 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ea8ef6d-a9c6-477e-89be-fa625c176d42-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.134032 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmpm4\" (UniqueName: \"kubernetes.io/projected/17edf562-a2eb-4263-bbce-c65013fd645f-kube-api-access-wmpm4\") on node \"crc\" DevicePath \"\"" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.134040 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.134049 4680 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17edf562-a2eb-4263-bbce-c65013fd645f-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.134057 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgxdr\" (UniqueName: \"kubernetes.io/projected/736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2-kube-api-access-mgxdr\") on node \"crc\" DevicePath \"\"" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.134066 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfmq6\" (UniqueName: \"kubernetes.io/projected/6ea8ef6d-a9c6-477e-89be-fa625c176d42-kube-api-access-sfmq6\") on node \"crc\" DevicePath \"\"" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.134076 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ea8ef6d-a9c6-477e-89be-fa625c176d42-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.134084 4680 reconciler_common.go:293] "Volume detached for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de1f714a-f726-49f5-b611-3b0690ed7245-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.134093 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de1f714a-f726-49f5-b611-3b0690ed7245-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.150531 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.166809 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.177018 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2" (UID: "736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.234872 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.236614 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.245781 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.248068 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.512193 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdnhz" event={"ID":"a08188de-38b5-4e41-b12d-cf5a1cf488ec","Type":"ContainerDied","Data":"034e3679b202f5e0adf105de050cb91a999c9e000072ea76f0ff320d22444eba"} Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.512489 4680 scope.go:117] "RemoveContainer" containerID="52b5f59888665260200b06dfd5b0a8020169e76df420785a5dc5a0dc18bc0d62" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.512960 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qdnhz" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.517788 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pdls7" event={"ID":"736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2","Type":"ContainerDied","Data":"8cf9818629dd52a77c9ba3711a21f0065b2a721c338e64ce90423198b35167ab"} Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.517870 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pdls7" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.522391 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rjbs8" event={"ID":"17edf562-a2eb-4263-bbce-c65013fd645f","Type":"ContainerDied","Data":"6a5f4b31efe4ec4712d358fdf1423a8e9782edb6ce9111e8085841f5ece4d64b"} Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.522456 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rjbs8" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.527949 4680 scope.go:117] "RemoveContainer" containerID="329e12520898e331c29c9e44b308f7992ccbd1f80dd8b2b25936502ab70b67c9" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.531783 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkz99" event={"ID":"de1f714a-f726-49f5-b611-3b0690ed7245","Type":"ContainerDied","Data":"9503389bb276854d5f5c1fe94188f23fe4913c315b92169ae937130e1e6ffab8"} Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.531831 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkz99" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.535499 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hgx6" event={"ID":"6ea8ef6d-a9c6-477e-89be-fa625c176d42","Type":"ContainerDied","Data":"e16604bbea75263178916f5f7a49b78c8fb31169a748d325043621c28f2846a1"} Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.535587 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6hgx6" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.583818 4680 scope.go:117] "RemoveContainer" containerID="081ccfa5a385b379a43eb624b861f84d5d9e4a9c6531970f7a7747a93bc4326e" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.584695 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qdnhz"] Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.595524 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qdnhz"] Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.596493 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.602090 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.618936 4680 scope.go:117] "RemoveContainer" containerID="4f639cdd2095a72a6ca92aed5a93b145ffa085c045290fda00ea10e25a4ac225" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.647982 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkz99"] Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.652409 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkz99"] Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.652587 4680 scope.go:117] "RemoveContainer" containerID="fc402ed79bce9a7192bcdb7048d26aec458a57d6297aecaa252ee684f7cef196" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.663766 4680 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rjbs8"] Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.667165 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rjbs8"] Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.678574 4680 scope.go:117] "RemoveContainer" containerID="25093738e65e7ec1c070b998df9d923bc3cd3fb497457a30bac3528df15b0e69" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.682373 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pdls7"] Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.686313 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pdls7"] Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.688980 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6hgx6"] Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.691512 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6hgx6"] Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.697890 4680 scope.go:117] "RemoveContainer" containerID="2c5b8edbc9581855f5f2f47cd62168294d0896b869a323bbf844265467f2b9ee" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.710793 4680 scope.go:117] "RemoveContainer" containerID="b6b777c84136f51c7b5d52ffffe2fb82e6ba8e3edde05c570308a0ceb0ad470c" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.726310 4680 scope.go:117] "RemoveContainer" containerID="68bf59842990d42a93f437aa9dfddbc1e8b7364d15250b15c76fdb2bdfe0461b" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.739532 4680 scope.go:117] "RemoveContainer" containerID="258a2ce3dc22aab310530bff1446de5aa35aead1a833cfc68dc76b4396b204bd" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.752204 4680 scope.go:117] "RemoveContainer" containerID="54c38411af0a2d57b4d25f55728f6a49700d1bad0f59388cbee84440b77634d2" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.755375 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.767468 4680 scope.go:117] "RemoveContainer" containerID="c5c403d76b186fc306294c351f57dc4ce333617204e44953e5af23ccf697f098" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.783503 4680 scope.go:117] "RemoveContainer" containerID="77af9eaa246cecab5446f790c274287c0c7b12e179c4f2b4a08a3d9d4c5ed7b3" Feb 17 17:48:48 crc kubenswrapper[4680]: I0217 17:48:48.998864 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-slcqr"] Feb 17 17:48:49 crc kubenswrapper[4680]: I0217 17:48:49.015889 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 17 17:48:49 crc kubenswrapper[4680]: I0217 17:48:49.037904 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 17 17:48:49 crc kubenswrapper[4680]: I0217 17:48:49.058600 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 17 17:48:49 crc kubenswrapper[4680]: I0217 17:48:49.114761 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17edf562-a2eb-4263-bbce-c65013fd645f" 
path="/var/lib/kubelet/pods/17edf562-a2eb-4263-bbce-c65013fd645f/volumes" Feb 17 17:48:49 crc kubenswrapper[4680]: I0217 17:48:49.115444 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea8ef6d-a9c6-477e-89be-fa625c176d42" path="/var/lib/kubelet/pods/6ea8ef6d-a9c6-477e-89be-fa625c176d42/volumes" Feb 17 17:48:49 crc kubenswrapper[4680]: I0217 17:48:49.117248 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2" path="/var/lib/kubelet/pods/736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2/volumes" Feb 17 17:48:49 crc kubenswrapper[4680]: I0217 17:48:49.119301 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a08188de-38b5-4e41-b12d-cf5a1cf488ec" path="/var/lib/kubelet/pods/a08188de-38b5-4e41-b12d-cf5a1cf488ec/volumes" Feb 17 17:48:49 crc kubenswrapper[4680]: I0217 17:48:49.121845 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de1f714a-f726-49f5-b611-3b0690ed7245" path="/var/lib/kubelet/pods/de1f714a-f726-49f5-b611-3b0690ed7245/volumes" Feb 17 17:48:49 crc kubenswrapper[4680]: I0217 17:48:49.234477 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-slcqr"] Feb 17 17:48:49 crc kubenswrapper[4680]: I0217 17:48:49.544743 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 17 17:48:49 crc kubenswrapper[4680]: I0217 17:48:49.544986 4680 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="459e9c94246b4e3258c1f64f6bc4908a463692d511a1e69a3ba3ad59411439af" exitCode=137 Feb 17 17:48:49 crc kubenswrapper[4680]: I0217 17:48:49.550538 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-slcqr" event={"ID":"b8d8d226-a13e-4e0c-b3b9-8fe70a25d02a","Type":"ContainerStarted","Data":"b578508618f14437ee15595bfe78d97355acc4aeb0b95a52c2a511629b18faf6"} Feb 17 17:48:49 crc kubenswrapper[4680]: I0217 17:48:49.550578 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-slcqr" event={"ID":"b8d8d226-a13e-4e0c-b3b9-8fe70a25d02a","Type":"ContainerStarted","Data":"65d8b8182b5bed53caad056f25bc84a13de41bb7488edb2fbe1b6618c0a2c83e"} Feb 17 17:48:49 crc kubenswrapper[4680]: I0217 17:48:49.550846 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-slcqr" Feb 17 17:48:49 crc kubenswrapper[4680]: I0217 17:48:49.553098 4680 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-slcqr container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Feb 17 17:48:49 crc kubenswrapper[4680]: I0217 17:48:49.553154 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-slcqr" podUID="b8d8d226-a13e-4e0c-b3b9-8fe70a25d02a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Feb 17 17:48:49 crc kubenswrapper[4680]: I0217 17:48:49.570201 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/marketplace-operator-79b997595-slcqr" podStartSLOduration=2.570182048 podStartE2EDuration="2.570182048s" podCreationTimestamp="2026-02-17 17:48:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:48:49.568417904 +0000 UTC m=+259.119316583" watchObservedRunningTime="2026-02-17 17:48:49.570182048 +0000 UTC m=+259.121080727" Feb 17 17:48:49 crc kubenswrapper[4680]: I0217 17:48:49.676667 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 17 17:48:49 crc kubenswrapper[4680]: I0217 17:48:49.790923 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 17 17:48:49 crc kubenswrapper[4680]: I0217 17:48:49.807035 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 17 17:48:49 crc kubenswrapper[4680]: I0217 17:48:49.896036 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 17 17:48:50 crc kubenswrapper[4680]: I0217 17:48:50.032241 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 17 17:48:50 crc kubenswrapper[4680]: I0217 17:48:50.038114 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 17 17:48:50 crc kubenswrapper[4680]: I0217 17:48:50.038189 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 17:48:50 crc kubenswrapper[4680]: I0217 17:48:50.059173 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 17:48:50 crc kubenswrapper[4680]: I0217 17:48:50.059216 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 17:48:50 crc kubenswrapper[4680]: I0217 17:48:50.059243 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 17:48:50 crc kubenswrapper[4680]: I0217 17:48:50.059279 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 17:48:50 crc kubenswrapper[4680]: I0217 17:48:50.059295 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). 
InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:48:50 crc kubenswrapper[4680]: I0217 17:48:50.059310 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 17:48:50 crc kubenswrapper[4680]: I0217 17:48:50.059299 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:48:50 crc kubenswrapper[4680]: I0217 17:48:50.059376 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:48:50 crc kubenswrapper[4680]: I0217 17:48:50.059400 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:48:50 crc kubenswrapper[4680]: I0217 17:48:50.059827 4680 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 17:48:50 crc kubenswrapper[4680]: I0217 17:48:50.059856 4680 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 17 17:48:50 crc kubenswrapper[4680]: I0217 17:48:50.059868 4680 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 17 17:48:50 crc kubenswrapper[4680]: I0217 17:48:50.059885 4680 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 17 17:48:50 crc kubenswrapper[4680]: I0217 17:48:50.076987 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:48:50 crc kubenswrapper[4680]: I0217 17:48:50.160557 4680 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 17:48:50 crc kubenswrapper[4680]: I0217 17:48:50.184458 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 17 17:48:50 crc kubenswrapper[4680]: I0217 17:48:50.369639 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 17 17:48:50 crc kubenswrapper[4680]: I0217 17:48:50.440185 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 17 17:48:50 crc kubenswrapper[4680]: I0217 17:48:50.557236 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 17 17:48:50 crc kubenswrapper[4680]: I0217 17:48:50.557365 4680 scope.go:117] "RemoveContainer" containerID="459e9c94246b4e3258c1f64f6bc4908a463692d511a1e69a3ba3ad59411439af" Feb 17 17:48:50 crc kubenswrapper[4680]: I0217 17:48:50.557369 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 17:48:50 crc kubenswrapper[4680]: I0217 17:48:50.573576 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-slcqr" Feb 17 17:48:50 crc kubenswrapper[4680]: I0217 17:48:50.931633 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 17 17:48:51 crc kubenswrapper[4680]: I0217 17:48:51.114151 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 17 17:48:51 crc kubenswrapper[4680]: I0217 17:48:51.114403 4680 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 17 17:48:51 crc kubenswrapper[4680]: I0217 17:48:51.124002 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 17:48:51 crc kubenswrapper[4680]: I0217 17:48:51.124037 4680 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="86ff1fa5-b3bb-4e34-bfdb-d5d8bf883629" Feb 17 17:48:51 crc kubenswrapper[4680]: I0217 17:48:51.127599 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 17:48:51 crc kubenswrapper[4680]: I0217 17:48:51.127644 4680 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="86ff1fa5-b3bb-4e34-bfdb-d5d8bf883629" Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.069413 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5zzk5"] Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.070172 4680 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-controller-manager/controller-manager-879f6c89f-5zzk5" podUID="8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa" containerName="controller-manager" containerID="cri-o://070ef90fbf6014111161b992bc77e566c8a69a06f8055bee3364af8afb55f2c0" gracePeriod=30 Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.166518 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kb4mk"] Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.166760 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kb4mk" podUID="14cc3153-fc79-4a60-b371-eae93f02ba78" containerName="route-controller-manager" containerID="cri-o://649b37816d626159c03aae5f0e7576238dda5dc1000f794ab365b5df323cc628" gracePeriod=30 Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.579538 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5zzk5" Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.643550 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kb4mk" Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.686456 4680 generic.go:334] "Generic (PLEG): container finished" podID="8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa" containerID="070ef90fbf6014111161b992bc77e566c8a69a06f8055bee3364af8afb55f2c0" exitCode=0 Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.686511 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5zzk5" event={"ID":"8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa","Type":"ContainerDied","Data":"070ef90fbf6014111161b992bc77e566c8a69a06f8055bee3364af8afb55f2c0"} Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.686535 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5zzk5" event={"ID":"8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa","Type":"ContainerDied","Data":"5907717f6a04064101d401212c74cd791a881895237373d12f20cc585f0e1ba4"} Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.686551 4680 scope.go:117] "RemoveContainer" containerID="070ef90fbf6014111161b992bc77e566c8a69a06f8055bee3364af8afb55f2c0" Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.686641 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5zzk5" Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.688258 4680 generic.go:334] "Generic (PLEG): container finished" podID="14cc3153-fc79-4a60-b371-eae93f02ba78" containerID="649b37816d626159c03aae5f0e7576238dda5dc1000f794ab365b5df323cc628" exitCode=0 Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.688283 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kb4mk" event={"ID":"14cc3153-fc79-4a60-b371-eae93f02ba78","Type":"ContainerDied","Data":"649b37816d626159c03aae5f0e7576238dda5dc1000f794ab365b5df323cc628"} Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.688297 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kb4mk" event={"ID":"14cc3153-fc79-4a60-b371-eae93f02ba78","Type":"ContainerDied","Data":"f99bc5bbb47c83c95112ef888c2608078335cb30dc07e79565f7b00330018946"} Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.688352 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kb4mk" Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.703675 4680 scope.go:117] "RemoveContainer" containerID="070ef90fbf6014111161b992bc77e566c8a69a06f8055bee3364af8afb55f2c0" Feb 17 17:49:14 crc kubenswrapper[4680]: E0217 17:49:14.704180 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"070ef90fbf6014111161b992bc77e566c8a69a06f8055bee3364af8afb55f2c0\": container with ID starting with 070ef90fbf6014111161b992bc77e566c8a69a06f8055bee3364af8afb55f2c0 not found: ID does not exist" containerID="070ef90fbf6014111161b992bc77e566c8a69a06f8055bee3364af8afb55f2c0" Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.704213 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"070ef90fbf6014111161b992bc77e566c8a69a06f8055bee3364af8afb55f2c0"} err="failed to get container status \"070ef90fbf6014111161b992bc77e566c8a69a06f8055bee3364af8afb55f2c0\": rpc error: code = NotFound desc = could not find container \"070ef90fbf6014111161b992bc77e566c8a69a06f8055bee3364af8afb55f2c0\": container with ID starting with 070ef90fbf6014111161b992bc77e566c8a69a06f8055bee3364af8afb55f2c0 not found: ID does not exist" Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.704236 4680 scope.go:117] "RemoveContainer" containerID="649b37816d626159c03aae5f0e7576238dda5dc1000f794ab365b5df323cc628" Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.719255 4680 scope.go:117] "RemoveContainer" containerID="649b37816d626159c03aae5f0e7576238dda5dc1000f794ab365b5df323cc628" Feb 17 17:49:14 crc kubenswrapper[4680]: E0217 17:49:14.719777 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"649b37816d626159c03aae5f0e7576238dda5dc1000f794ab365b5df323cc628\": container with ID starting with 649b37816d626159c03aae5f0e7576238dda5dc1000f794ab365b5df323cc628 not found: ID does not exist" containerID="649b37816d626159c03aae5f0e7576238dda5dc1000f794ab365b5df323cc628" Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.719808 4680 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"649b37816d626159c03aae5f0e7576238dda5dc1000f794ab365b5df323cc628"} err="failed to get container status \"649b37816d626159c03aae5f0e7576238dda5dc1000f794ab365b5df323cc628\": rpc error: code = NotFound desc = could not find container \"649b37816d626159c03aae5f0e7576238dda5dc1000f794ab365b5df323cc628\": container with ID starting with 649b37816d626159c03aae5f0e7576238dda5dc1000f794ab365b5df323cc628 not found: ID does not exist" Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.746411 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa-serving-cert\") pod \"8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa\" (UID: \"8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa\") " Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.746460 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14cc3153-fc79-4a60-b371-eae93f02ba78-serving-cert\") pod \"14cc3153-fc79-4a60-b371-eae93f02ba78\" (UID: \"14cc3153-fc79-4a60-b371-eae93f02ba78\") " Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.746486 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szhmv\" (UniqueName: \"kubernetes.io/projected/8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa-kube-api-access-szhmv\") pod \"8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa\" (UID: \"8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa\") " Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.746511 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbwgj\" (UniqueName: \"kubernetes.io/projected/14cc3153-fc79-4a60-b371-eae93f02ba78-kube-api-access-jbwgj\") pod \"14cc3153-fc79-4a60-b371-eae93f02ba78\" (UID: \"14cc3153-fc79-4a60-b371-eae93f02ba78\") " Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.746530 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14cc3153-fc79-4a60-b371-eae93f02ba78-config\") pod \"14cc3153-fc79-4a60-b371-eae93f02ba78\" (UID: \"14cc3153-fc79-4a60-b371-eae93f02ba78\") " Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.746583 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa-proxy-ca-bundles\") pod \"8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa\" (UID: \"8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa\") " Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.746621 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa-client-ca\") pod \"8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa\" (UID: \"8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa\") " Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.746651 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa-config\") pod \"8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa\" (UID: \"8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa\") " Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.746682 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14cc3153-fc79-4a60-b371-eae93f02ba78-client-ca\") pod 
\"14cc3153-fc79-4a60-b371-eae93f02ba78\" (UID: \"14cc3153-fc79-4a60-b371-eae93f02ba78\") " Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.747423 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14cc3153-fc79-4a60-b371-eae93f02ba78-client-ca" (OuterVolumeSpecName: "client-ca") pod "14cc3153-fc79-4a60-b371-eae93f02ba78" (UID: "14cc3153-fc79-4a60-b371-eae93f02ba78"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.747973 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa" (UID: "8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.748013 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa-client-ca" (OuterVolumeSpecName: "client-ca") pod "8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa" (UID: "8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.748568 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa-config" (OuterVolumeSpecName: "config") pod "8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa" (UID: "8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.748640 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14cc3153-fc79-4a60-b371-eae93f02ba78-config" (OuterVolumeSpecName: "config") pod "14cc3153-fc79-4a60-b371-eae93f02ba78" (UID: "14cc3153-fc79-4a60-b371-eae93f02ba78"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.754243 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa" (UID: "8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.754253 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14cc3153-fc79-4a60-b371-eae93f02ba78-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "14cc3153-fc79-4a60-b371-eae93f02ba78" (UID: "14cc3153-fc79-4a60-b371-eae93f02ba78"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.754287 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14cc3153-fc79-4a60-b371-eae93f02ba78-kube-api-access-jbwgj" (OuterVolumeSpecName: "kube-api-access-jbwgj") pod "14cc3153-fc79-4a60-b371-eae93f02ba78" (UID: "14cc3153-fc79-4a60-b371-eae93f02ba78"). InnerVolumeSpecName "kube-api-access-jbwgj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.754429 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa-kube-api-access-szhmv" (OuterVolumeSpecName: "kube-api-access-szhmv") pod "8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa" (UID: "8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa"). InnerVolumeSpecName "kube-api-access-szhmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.847652 4680 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.847722 4680 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.847742 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa-config\") on node \"crc\" DevicePath \"\"" Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.847751 4680 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14cc3153-fc79-4a60-b371-eae93f02ba78-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.847759 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.847767 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14cc3153-fc79-4a60-b371-eae93f02ba78-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.847777 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szhmv\" (UniqueName: \"kubernetes.io/projected/8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa-kube-api-access-szhmv\") on node \"crc\" DevicePath \"\"" Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.847787 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbwgj\" (UniqueName: \"kubernetes.io/projected/14cc3153-fc79-4a60-b371-eae93f02ba78-kube-api-access-jbwgj\") on node \"crc\" DevicePath \"\"" Feb 17 17:49:14 crc kubenswrapper[4680]: I0217 17:49:14.847797 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14cc3153-fc79-4a60-b371-eae93f02ba78-config\") on node \"crc\" DevicePath \"\"" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.017488 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5zzk5"] Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.021181 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5zzk5"] Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.028857 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kb4mk"] Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.032238 4680 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kb4mk"] Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.816344 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14cc3153-fc79-4a60-b371-eae93f02ba78" path="/var/lib/kubelet/pods/14cc3153-fc79-4a60-b371-eae93f02ba78/volumes" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.816893 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa" path="/var/lib/kubelet/pods/8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa/volumes" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.817379 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-86489848d5-tsqms"] Feb 17 17:49:15 crc kubenswrapper[4680]: E0217 17:49:15.817551 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17edf562-a2eb-4263-bbce-c65013fd645f" containerName="marketplace-operator" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.817566 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="17edf562-a2eb-4263-bbce-c65013fd645f" containerName="marketplace-operator" Feb 17 17:49:15 crc kubenswrapper[4680]: E0217 17:49:15.817576 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2" containerName="extract-content" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.817584 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2" containerName="extract-content" Feb 17 17:49:15 crc kubenswrapper[4680]: E0217 17:49:15.817592 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1f714a-f726-49f5-b611-3b0690ed7245" containerName="extract-content" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.817598 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1f714a-f726-49f5-b611-3b0690ed7245" containerName="extract-content" Feb 17 17:49:15 crc kubenswrapper[4680]: E0217 17:49:15.817607 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a08188de-38b5-4e41-b12d-cf5a1cf488ec" containerName="extract-content" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.817613 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="a08188de-38b5-4e41-b12d-cf5a1cf488ec" containerName="extract-content" Feb 17 17:49:15 crc kubenswrapper[4680]: E0217 17:49:15.817620 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa" containerName="controller-manager" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.817626 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa" containerName="controller-manager" Feb 17 17:49:15 crc kubenswrapper[4680]: E0217 17:49:15.817636 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1f714a-f726-49f5-b611-3b0690ed7245" containerName="extract-utilities" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.817642 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1f714a-f726-49f5-b611-3b0690ed7245" containerName="extract-utilities" Feb 17 17:49:15 crc kubenswrapper[4680]: E0217 17:49:15.817650 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ea8ef6d-a9c6-477e-89be-fa625c176d42" containerName="extract-content" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.817656 4680 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6ea8ef6d-a9c6-477e-89be-fa625c176d42" containerName="extract-content" Feb 17 17:49:15 crc kubenswrapper[4680]: E0217 17:49:15.817664 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a08188de-38b5-4e41-b12d-cf5a1cf488ec" containerName="registry-server" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.817696 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="a08188de-38b5-4e41-b12d-cf5a1cf488ec" containerName="registry-server" Feb 17 17:49:15 crc kubenswrapper[4680]: E0217 17:49:15.817706 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ea8ef6d-a9c6-477e-89be-fa625c176d42" containerName="registry-server" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.817713 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ea8ef6d-a9c6-477e-89be-fa625c176d42" containerName="registry-server" Feb 17 17:49:15 crc kubenswrapper[4680]: E0217 17:49:15.817721 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2" containerName="extract-utilities" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.817727 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2" containerName="extract-utilities" Feb 17 17:49:15 crc kubenswrapper[4680]: E0217 17:49:15.817740 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2" containerName="registry-server" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.817746 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2" containerName="registry-server" Feb 17 17:49:15 crc kubenswrapper[4680]: E0217 17:49:15.817756 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1f714a-f726-49f5-b611-3b0690ed7245" containerName="registry-server" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.817762 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1f714a-f726-49f5-b611-3b0690ed7245" containerName="registry-server" Feb 17 17:49:15 crc kubenswrapper[4680]: E0217 17:49:15.817773 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14cc3153-fc79-4a60-b371-eae93f02ba78" containerName="route-controller-manager" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.817779 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="14cc3153-fc79-4a60-b371-eae93f02ba78" containerName="route-controller-manager" Feb 17 17:49:15 crc kubenswrapper[4680]: E0217 17:49:15.817788 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ea8ef6d-a9c6-477e-89be-fa625c176d42" containerName="extract-utilities" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.817794 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ea8ef6d-a9c6-477e-89be-fa625c176d42" containerName="extract-utilities" Feb 17 17:49:15 crc kubenswrapper[4680]: E0217 17:49:15.817805 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a08188de-38b5-4e41-b12d-cf5a1cf488ec" containerName="extract-utilities" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.817812 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="a08188de-38b5-4e41-b12d-cf5a1cf488ec" containerName="extract-utilities" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.817910 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="de1f714a-f726-49f5-b611-3b0690ed7245" containerName="registry-server" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.817920 4680 
memory_manager.go:354] "RemoveStaleState removing state" podUID="14cc3153-fc79-4a60-b371-eae93f02ba78" containerName="route-controller-manager" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.817927 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="736e2e4a-db6c-44ba-b0ab-aa1cc0a42cf2" containerName="registry-server" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.817937 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="a08188de-38b5-4e41-b12d-cf5a1cf488ec" containerName="registry-server" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.817946 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="17edf562-a2eb-4263-bbce-c65013fd645f" containerName="marketplace-operator" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.817954 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="8aec93fe-9fdb-48de-b4b1-0fb0c2612bfa" containerName="controller-manager" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.817961 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ea8ef6d-a9c6-477e-89be-fa625c176d42" containerName="registry-server" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.818245 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86489848d5-tsqms"] Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.818670 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86489848d5-tsqms" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.824105 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.824550 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.848098 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.848481 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.848693 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.851637 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.867349 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.917297 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd252884-e940-4f2e-a8ca-2b1e2c5b721e-config\") pod \"controller-manager-86489848d5-tsqms\" (UID: \"fd252884-e940-4f2e-a8ca-2b1e2c5b721e\") " pod="openshift-controller-manager/controller-manager-86489848d5-tsqms" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.917593 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h4c7\" (UniqueName: 
\"kubernetes.io/projected/fd252884-e940-4f2e-a8ca-2b1e2c5b721e-kube-api-access-4h4c7\") pod \"controller-manager-86489848d5-tsqms\" (UID: \"fd252884-e940-4f2e-a8ca-2b1e2c5b721e\") " pod="openshift-controller-manager/controller-manager-86489848d5-tsqms" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.917730 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd252884-e940-4f2e-a8ca-2b1e2c5b721e-proxy-ca-bundles\") pod \"controller-manager-86489848d5-tsqms\" (UID: \"fd252884-e940-4f2e-a8ca-2b1e2c5b721e\") " pod="openshift-controller-manager/controller-manager-86489848d5-tsqms" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.917865 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd252884-e940-4f2e-a8ca-2b1e2c5b721e-serving-cert\") pod \"controller-manager-86489848d5-tsqms\" (UID: \"fd252884-e940-4f2e-a8ca-2b1e2c5b721e\") " pod="openshift-controller-manager/controller-manager-86489848d5-tsqms" Feb 17 17:49:15 crc kubenswrapper[4680]: I0217 17:49:15.917959 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd252884-e940-4f2e-a8ca-2b1e2c5b721e-client-ca\") pod \"controller-manager-86489848d5-tsqms\" (UID: \"fd252884-e940-4f2e-a8ca-2b1e2c5b721e\") " pod="openshift-controller-manager/controller-manager-86489848d5-tsqms" Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.019060 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd252884-e940-4f2e-a8ca-2b1e2c5b721e-client-ca\") pod \"controller-manager-86489848d5-tsqms\" (UID: \"fd252884-e940-4f2e-a8ca-2b1e2c5b721e\") " pod="openshift-controller-manager/controller-manager-86489848d5-tsqms" Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.019134 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd252884-e940-4f2e-a8ca-2b1e2c5b721e-config\") pod \"controller-manager-86489848d5-tsqms\" (UID: \"fd252884-e940-4f2e-a8ca-2b1e2c5b721e\") " pod="openshift-controller-manager/controller-manager-86489848d5-tsqms" Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.019156 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h4c7\" (UniqueName: \"kubernetes.io/projected/fd252884-e940-4f2e-a8ca-2b1e2c5b721e-kube-api-access-4h4c7\") pod \"controller-manager-86489848d5-tsqms\" (UID: \"fd252884-e940-4f2e-a8ca-2b1e2c5b721e\") " pod="openshift-controller-manager/controller-manager-86489848d5-tsqms" Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.019177 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd252884-e940-4f2e-a8ca-2b1e2c5b721e-proxy-ca-bundles\") pod \"controller-manager-86489848d5-tsqms\" (UID: \"fd252884-e940-4f2e-a8ca-2b1e2c5b721e\") " pod="openshift-controller-manager/controller-manager-86489848d5-tsqms" Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.019236 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd252884-e940-4f2e-a8ca-2b1e2c5b721e-serving-cert\") pod \"controller-manager-86489848d5-tsqms\" (UID: 
\"fd252884-e940-4f2e-a8ca-2b1e2c5b721e\") " pod="openshift-controller-manager/controller-manager-86489848d5-tsqms" Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.021287 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd252884-e940-4f2e-a8ca-2b1e2c5b721e-client-ca\") pod \"controller-manager-86489848d5-tsqms\" (UID: \"fd252884-e940-4f2e-a8ca-2b1e2c5b721e\") " pod="openshift-controller-manager/controller-manager-86489848d5-tsqms" Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.021888 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd252884-e940-4f2e-a8ca-2b1e2c5b721e-proxy-ca-bundles\") pod \"controller-manager-86489848d5-tsqms\" (UID: \"fd252884-e940-4f2e-a8ca-2b1e2c5b721e\") " pod="openshift-controller-manager/controller-manager-86489848d5-tsqms" Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.022487 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd252884-e940-4f2e-a8ca-2b1e2c5b721e-config\") pod \"controller-manager-86489848d5-tsqms\" (UID: \"fd252884-e940-4f2e-a8ca-2b1e2c5b721e\") " pod="openshift-controller-manager/controller-manager-86489848d5-tsqms" Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.025039 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd252884-e940-4f2e-a8ca-2b1e2c5b721e-serving-cert\") pod \"controller-manager-86489848d5-tsqms\" (UID: \"fd252884-e940-4f2e-a8ca-2b1e2c5b721e\") " pod="openshift-controller-manager/controller-manager-86489848d5-tsqms" Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.206515 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h4c7\" (UniqueName: \"kubernetes.io/projected/fd252884-e940-4f2e-a8ca-2b1e2c5b721e-kube-api-access-4h4c7\") pod \"controller-manager-86489848d5-tsqms\" (UID: \"fd252884-e940-4f2e-a8ca-2b1e2c5b721e\") " pod="openshift-controller-manager/controller-manager-86489848d5-tsqms" Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.259163 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86489848d5-tsqms" Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.460805 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85fb6447fc-zrh2c"] Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.461618 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-zrh2c" Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.464058 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.464617 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.464679 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.464745 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.472573 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85fb6447fc-zrh2c"] Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.472725 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.473372 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.524462 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcgzr\" (UniqueName: \"kubernetes.io/projected/59cc51cb-694e-4e18-8c20-b0f8594d26d4-kube-api-access-gcgzr\") pod \"route-controller-manager-85fb6447fc-zrh2c\" (UID: \"59cc51cb-694e-4e18-8c20-b0f8594d26d4\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-zrh2c" Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.524554 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59cc51cb-694e-4e18-8c20-b0f8594d26d4-config\") pod \"route-controller-manager-85fb6447fc-zrh2c\" (UID: \"59cc51cb-694e-4e18-8c20-b0f8594d26d4\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-zrh2c" Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.524585 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59cc51cb-694e-4e18-8c20-b0f8594d26d4-serving-cert\") pod \"route-controller-manager-85fb6447fc-zrh2c\" (UID: \"59cc51cb-694e-4e18-8c20-b0f8594d26d4\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-zrh2c" Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.524606 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59cc51cb-694e-4e18-8c20-b0f8594d26d4-client-ca\") pod \"route-controller-manager-85fb6447fc-zrh2c\" (UID: \"59cc51cb-694e-4e18-8c20-b0f8594d26d4\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-zrh2c" Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.626454 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59cc51cb-694e-4e18-8c20-b0f8594d26d4-client-ca\") pod 
\"route-controller-manager-85fb6447fc-zrh2c\" (UID: \"59cc51cb-694e-4e18-8c20-b0f8594d26d4\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-zrh2c" Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.626618 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcgzr\" (UniqueName: \"kubernetes.io/projected/59cc51cb-694e-4e18-8c20-b0f8594d26d4-kube-api-access-gcgzr\") pod \"route-controller-manager-85fb6447fc-zrh2c\" (UID: \"59cc51cb-694e-4e18-8c20-b0f8594d26d4\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-zrh2c" Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.626665 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59cc51cb-694e-4e18-8c20-b0f8594d26d4-config\") pod \"route-controller-manager-85fb6447fc-zrh2c\" (UID: \"59cc51cb-694e-4e18-8c20-b0f8594d26d4\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-zrh2c" Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.626712 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59cc51cb-694e-4e18-8c20-b0f8594d26d4-serving-cert\") pod \"route-controller-manager-85fb6447fc-zrh2c\" (UID: \"59cc51cb-694e-4e18-8c20-b0f8594d26d4\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-zrh2c" Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.627970 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59cc51cb-694e-4e18-8c20-b0f8594d26d4-config\") pod \"route-controller-manager-85fb6447fc-zrh2c\" (UID: \"59cc51cb-694e-4e18-8c20-b0f8594d26d4\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-zrh2c" Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.628471 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59cc51cb-694e-4e18-8c20-b0f8594d26d4-client-ca\") pod \"route-controller-manager-85fb6447fc-zrh2c\" (UID: \"59cc51cb-694e-4e18-8c20-b0f8594d26d4\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-zrh2c" Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.632897 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59cc51cb-694e-4e18-8c20-b0f8594d26d4-serving-cert\") pod \"route-controller-manager-85fb6447fc-zrh2c\" (UID: \"59cc51cb-694e-4e18-8c20-b0f8594d26d4\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-zrh2c" Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.642946 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcgzr\" (UniqueName: \"kubernetes.io/projected/59cc51cb-694e-4e18-8c20-b0f8594d26d4-kube-api-access-gcgzr\") pod \"route-controller-manager-85fb6447fc-zrh2c\" (UID: \"59cc51cb-694e-4e18-8c20-b0f8594d26d4\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-zrh2c" Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.674209 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86489848d5-tsqms"] Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.793848 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-zrh2c" Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.869636 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86489848d5-tsqms" event={"ID":"fd252884-e940-4f2e-a8ca-2b1e2c5b721e","Type":"ContainerStarted","Data":"9f66eed03797c51563fc6bea66d986adb604cf7892726706e4feef888a54bcd4"} Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.869677 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86489848d5-tsqms" event={"ID":"fd252884-e940-4f2e-a8ca-2b1e2c5b721e","Type":"ContainerStarted","Data":"9eeb3ab63a920f4cad1f264079a8bf78bfc9b93884cc585cc1293ba2296d7fb4"} Feb 17 17:49:16 crc kubenswrapper[4680]: I0217 17:49:16.972510 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85fb6447fc-zrh2c"] Feb 17 17:49:16 crc kubenswrapper[4680]: W0217 17:49:16.979918 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59cc51cb_694e_4e18_8c20_b0f8594d26d4.slice/crio-764132271a8cba2b772c24157e2cc5a949d2646da15c4b4f72f38d7f153da432 WatchSource:0}: Error finding container 764132271a8cba2b772c24157e2cc5a949d2646da15c4b4f72f38d7f153da432: Status 404 returned error can't find the container with id 764132271a8cba2b772c24157e2cc5a949d2646da15c4b4f72f38d7f153da432 Feb 17 17:49:17 crc kubenswrapper[4680]: I0217 17:49:17.875987 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-zrh2c" event={"ID":"59cc51cb-694e-4e18-8c20-b0f8594d26d4","Type":"ContainerStarted","Data":"f95706ea2a22b4a67670aaf994e19a18aad989fdf831820dfa8f5509fccef4c8"} Feb 17 17:49:17 crc kubenswrapper[4680]: I0217 17:49:17.876275 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-zrh2c" event={"ID":"59cc51cb-694e-4e18-8c20-b0f8594d26d4","Type":"ContainerStarted","Data":"764132271a8cba2b772c24157e2cc5a949d2646da15c4b4f72f38d7f153da432"} Feb 17 17:49:17 crc kubenswrapper[4680]: I0217 17:49:17.876297 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-86489848d5-tsqms" Feb 17 17:49:17 crc kubenswrapper[4680]: I0217 17:49:17.876328 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-zrh2c" Feb 17 17:49:17 crc kubenswrapper[4680]: I0217 17:49:17.881204 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-86489848d5-tsqms" Feb 17 17:49:17 crc kubenswrapper[4680]: I0217 17:49:17.890554 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-86489848d5-tsqms" podStartSLOduration=3.89053697 podStartE2EDuration="3.89053697s" podCreationTimestamp="2026-02-17 17:49:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:49:17.887436404 +0000 UTC m=+287.438335083" watchObservedRunningTime="2026-02-17 17:49:17.89053697 +0000 UTC m=+287.441435649" Feb 17 17:49:17 crc kubenswrapper[4680]: I0217 17:49:17.920044 4680 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-zrh2c" podStartSLOduration=3.920025693 podStartE2EDuration="3.920025693s" podCreationTimestamp="2026-02-17 17:49:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:49:17.916310198 +0000 UTC m=+287.467208947" watchObservedRunningTime="2026-02-17 17:49:17.920025693 +0000 UTC m=+287.470924372" Feb 17 17:49:18 crc kubenswrapper[4680]: I0217 17:49:18.136287 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-zrh2c" Feb 17 17:49:30 crc kubenswrapper[4680]: I0217 17:49:30.909680 4680 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 17 17:50:04 crc kubenswrapper[4680]: I0217 17:50:04.553884 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6lssp"] Feb 17 17:50:04 crc kubenswrapper[4680]: I0217 17:50:04.555454 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6lssp" Feb 17 17:50:04 crc kubenswrapper[4680]: I0217 17:50:04.558034 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 17 17:50:04 crc kubenswrapper[4680]: I0217 17:50:04.563660 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6lssp"] Feb 17 17:50:04 crc kubenswrapper[4680]: I0217 17:50:04.680266 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2660862d-dbb8-465e-a47d-91dab06d81cb-catalog-content\") pod \"redhat-marketplace-6lssp\" (UID: \"2660862d-dbb8-465e-a47d-91dab06d81cb\") " pod="openshift-marketplace/redhat-marketplace-6lssp" Feb 17 17:50:04 crc kubenswrapper[4680]: I0217 17:50:04.680344 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2660862d-dbb8-465e-a47d-91dab06d81cb-utilities\") pod \"redhat-marketplace-6lssp\" (UID: \"2660862d-dbb8-465e-a47d-91dab06d81cb\") " pod="openshift-marketplace/redhat-marketplace-6lssp" Feb 17 17:50:04 crc kubenswrapper[4680]: I0217 17:50:04.680367 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgjx8\" (UniqueName: \"kubernetes.io/projected/2660862d-dbb8-465e-a47d-91dab06d81cb-kube-api-access-jgjx8\") pod \"redhat-marketplace-6lssp\" (UID: \"2660862d-dbb8-465e-a47d-91dab06d81cb\") " pod="openshift-marketplace/redhat-marketplace-6lssp" Feb 17 17:50:04 crc kubenswrapper[4680]: I0217 17:50:04.747531 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cc5x2"] Feb 17 17:50:04 crc kubenswrapper[4680]: I0217 17:50:04.749596 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cc5x2" Feb 17 17:50:04 crc kubenswrapper[4680]: I0217 17:50:04.760780 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 17 17:50:04 crc kubenswrapper[4680]: I0217 17:50:04.763333 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cc5x2"] Feb 17 17:50:04 crc kubenswrapper[4680]: I0217 17:50:04.808801 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2660862d-dbb8-465e-a47d-91dab06d81cb-catalog-content\") pod \"redhat-marketplace-6lssp\" (UID: \"2660862d-dbb8-465e-a47d-91dab06d81cb\") " pod="openshift-marketplace/redhat-marketplace-6lssp" Feb 17 17:50:04 crc kubenswrapper[4680]: I0217 17:50:04.808864 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2660862d-dbb8-465e-a47d-91dab06d81cb-utilities\") pod \"redhat-marketplace-6lssp\" (UID: \"2660862d-dbb8-465e-a47d-91dab06d81cb\") " pod="openshift-marketplace/redhat-marketplace-6lssp" Feb 17 17:50:04 crc kubenswrapper[4680]: I0217 17:50:04.808892 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgjx8\" (UniqueName: \"kubernetes.io/projected/2660862d-dbb8-465e-a47d-91dab06d81cb-kube-api-access-jgjx8\") pod \"redhat-marketplace-6lssp\" (UID: \"2660862d-dbb8-465e-a47d-91dab06d81cb\") " pod="openshift-marketplace/redhat-marketplace-6lssp" Feb 17 17:50:04 crc kubenswrapper[4680]: I0217 17:50:04.809712 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2660862d-dbb8-465e-a47d-91dab06d81cb-catalog-content\") pod \"redhat-marketplace-6lssp\" (UID: \"2660862d-dbb8-465e-a47d-91dab06d81cb\") " pod="openshift-marketplace/redhat-marketplace-6lssp" Feb 17 17:50:04 crc kubenswrapper[4680]: I0217 17:50:04.810064 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2660862d-dbb8-465e-a47d-91dab06d81cb-utilities\") pod \"redhat-marketplace-6lssp\" (UID: \"2660862d-dbb8-465e-a47d-91dab06d81cb\") " pod="openshift-marketplace/redhat-marketplace-6lssp" Feb 17 17:50:04 crc kubenswrapper[4680]: I0217 17:50:04.836270 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgjx8\" (UniqueName: \"kubernetes.io/projected/2660862d-dbb8-465e-a47d-91dab06d81cb-kube-api-access-jgjx8\") pod \"redhat-marketplace-6lssp\" (UID: \"2660862d-dbb8-465e-a47d-91dab06d81cb\") " pod="openshift-marketplace/redhat-marketplace-6lssp" Feb 17 17:50:04 crc kubenswrapper[4680]: I0217 17:50:04.871383 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6lssp" Feb 17 17:50:04 crc kubenswrapper[4680]: I0217 17:50:04.909995 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36277038-c2ae-486f-aa6d-ac179eb779a5-catalog-content\") pod \"redhat-operators-cc5x2\" (UID: \"36277038-c2ae-486f-aa6d-ac179eb779a5\") " pod="openshift-marketplace/redhat-operators-cc5x2" Feb 17 17:50:04 crc kubenswrapper[4680]: I0217 17:50:04.910283 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfjp4\" (UniqueName: \"kubernetes.io/projected/36277038-c2ae-486f-aa6d-ac179eb779a5-kube-api-access-bfjp4\") pod \"redhat-operators-cc5x2\" (UID: \"36277038-c2ae-486f-aa6d-ac179eb779a5\") " pod="openshift-marketplace/redhat-operators-cc5x2" Feb 17 17:50:04 crc kubenswrapper[4680]: I0217 17:50:04.910414 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36277038-c2ae-486f-aa6d-ac179eb779a5-utilities\") pod \"redhat-operators-cc5x2\" (UID: \"36277038-c2ae-486f-aa6d-ac179eb779a5\") " pod="openshift-marketplace/redhat-operators-cc5x2" Feb 17 17:50:05 crc kubenswrapper[4680]: I0217 17:50:05.011672 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36277038-c2ae-486f-aa6d-ac179eb779a5-catalog-content\") pod \"redhat-operators-cc5x2\" (UID: \"36277038-c2ae-486f-aa6d-ac179eb779a5\") " pod="openshift-marketplace/redhat-operators-cc5x2" Feb 17 17:50:05 crc kubenswrapper[4680]: I0217 17:50:05.011768 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfjp4\" (UniqueName: \"kubernetes.io/projected/36277038-c2ae-486f-aa6d-ac179eb779a5-kube-api-access-bfjp4\") pod \"redhat-operators-cc5x2\" (UID: \"36277038-c2ae-486f-aa6d-ac179eb779a5\") " pod="openshift-marketplace/redhat-operators-cc5x2" Feb 17 17:50:05 crc kubenswrapper[4680]: I0217 17:50:05.011822 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36277038-c2ae-486f-aa6d-ac179eb779a5-utilities\") pod \"redhat-operators-cc5x2\" (UID: \"36277038-c2ae-486f-aa6d-ac179eb779a5\") " pod="openshift-marketplace/redhat-operators-cc5x2" Feb 17 17:50:05 crc kubenswrapper[4680]: I0217 17:50:05.012845 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36277038-c2ae-486f-aa6d-ac179eb779a5-utilities\") pod \"redhat-operators-cc5x2\" (UID: \"36277038-c2ae-486f-aa6d-ac179eb779a5\") " pod="openshift-marketplace/redhat-operators-cc5x2" Feb 17 17:50:05 crc kubenswrapper[4680]: I0217 17:50:05.013714 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36277038-c2ae-486f-aa6d-ac179eb779a5-catalog-content\") pod \"redhat-operators-cc5x2\" (UID: \"36277038-c2ae-486f-aa6d-ac179eb779a5\") " pod="openshift-marketplace/redhat-operators-cc5x2" Feb 17 17:50:05 crc kubenswrapper[4680]: I0217 17:50:05.037732 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfjp4\" (UniqueName: \"kubernetes.io/projected/36277038-c2ae-486f-aa6d-ac179eb779a5-kube-api-access-bfjp4\") pod \"redhat-operators-cc5x2\" (UID: 
\"36277038-c2ae-486f-aa6d-ac179eb779a5\") " pod="openshift-marketplace/redhat-operators-cc5x2" Feb 17 17:50:05 crc kubenswrapper[4680]: I0217 17:50:05.066293 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cc5x2" Feb 17 17:50:05 crc kubenswrapper[4680]: I0217 17:50:05.085856 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6lssp"] Feb 17 17:50:05 crc kubenswrapper[4680]: W0217 17:50:05.100484 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2660862d_dbb8_465e_a47d_91dab06d81cb.slice/crio-ece946ac97c5c6a59744eb52ab9487eb366518ccd6b88be35e8513114e56104d WatchSource:0}: Error finding container ece946ac97c5c6a59744eb52ab9487eb366518ccd6b88be35e8513114e56104d: Status 404 returned error can't find the container with id ece946ac97c5c6a59744eb52ab9487eb366518ccd6b88be35e8513114e56104d Feb 17 17:50:05 crc kubenswrapper[4680]: I0217 17:50:05.118979 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6lssp" event={"ID":"2660862d-dbb8-465e-a47d-91dab06d81cb","Type":"ContainerStarted","Data":"ece946ac97c5c6a59744eb52ab9487eb366518ccd6b88be35e8513114e56104d"} Feb 17 17:50:05 crc kubenswrapper[4680]: I0217 17:50:05.310121 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cc5x2"] Feb 17 17:50:05 crc kubenswrapper[4680]: W0217 17:50:05.322405 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36277038_c2ae_486f_aa6d_ac179eb779a5.slice/crio-2d8ad8c331ef7bfa92e3ca3d094680a28dedf307805eb282ca0dd3fb1d66043d WatchSource:0}: Error finding container 2d8ad8c331ef7bfa92e3ca3d094680a28dedf307805eb282ca0dd3fb1d66043d: Status 404 returned error can't find the container with id 2d8ad8c331ef7bfa92e3ca3d094680a28dedf307805eb282ca0dd3fb1d66043d Feb 17 17:50:05 crc kubenswrapper[4680]: I0217 17:50:05.589297 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:50:05 crc kubenswrapper[4680]: I0217 17:50:05.590147 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:50:06 crc kubenswrapper[4680]: I0217 17:50:06.124972 4680 generic.go:334] "Generic (PLEG): container finished" podID="2660862d-dbb8-465e-a47d-91dab06d81cb" containerID="47c8311483ce28cdaddde0ad31e35e21cfc3fa0c7f5b8f1c933c141c9d83a094" exitCode=0 Feb 17 17:50:06 crc kubenswrapper[4680]: I0217 17:50:06.125068 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6lssp" event={"ID":"2660862d-dbb8-465e-a47d-91dab06d81cb","Type":"ContainerDied","Data":"47c8311483ce28cdaddde0ad31e35e21cfc3fa0c7f5b8f1c933c141c9d83a094"} Feb 17 17:50:06 crc kubenswrapper[4680]: I0217 17:50:06.126549 4680 generic.go:334] "Generic (PLEG): container finished" podID="36277038-c2ae-486f-aa6d-ac179eb779a5" 
containerID="d61af5841c4b2b7caee7ff51216d3bb496385beae48aa6627f25023360a7e336" exitCode=0 Feb 17 17:50:06 crc kubenswrapper[4680]: I0217 17:50:06.126571 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc5x2" event={"ID":"36277038-c2ae-486f-aa6d-ac179eb779a5","Type":"ContainerDied","Data":"d61af5841c4b2b7caee7ff51216d3bb496385beae48aa6627f25023360a7e336"} Feb 17 17:50:06 crc kubenswrapper[4680]: I0217 17:50:06.126586 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc5x2" event={"ID":"36277038-c2ae-486f-aa6d-ac179eb779a5","Type":"ContainerStarted","Data":"2d8ad8c331ef7bfa92e3ca3d094680a28dedf307805eb282ca0dd3fb1d66043d"} Feb 17 17:50:06 crc kubenswrapper[4680]: I0217 17:50:06.346518 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b96s8"] Feb 17 17:50:06 crc kubenswrapper[4680]: I0217 17:50:06.347512 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b96s8" Feb 17 17:50:06 crc kubenswrapper[4680]: I0217 17:50:06.349473 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 17 17:50:06 crc kubenswrapper[4680]: I0217 17:50:06.357913 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b96s8"] Feb 17 17:50:06 crc kubenswrapper[4680]: I0217 17:50:06.531584 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88dae8bc-f19a-4b03-9443-19af5340b54e-catalog-content\") pod \"certified-operators-b96s8\" (UID: \"88dae8bc-f19a-4b03-9443-19af5340b54e\") " pod="openshift-marketplace/certified-operators-b96s8" Feb 17 17:50:06 crc kubenswrapper[4680]: I0217 17:50:06.531667 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88dae8bc-f19a-4b03-9443-19af5340b54e-utilities\") pod \"certified-operators-b96s8\" (UID: \"88dae8bc-f19a-4b03-9443-19af5340b54e\") " pod="openshift-marketplace/certified-operators-b96s8" Feb 17 17:50:06 crc kubenswrapper[4680]: I0217 17:50:06.531712 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v6dg\" (UniqueName: \"kubernetes.io/projected/88dae8bc-f19a-4b03-9443-19af5340b54e-kube-api-access-4v6dg\") pod \"certified-operators-b96s8\" (UID: \"88dae8bc-f19a-4b03-9443-19af5340b54e\") " pod="openshift-marketplace/certified-operators-b96s8" Feb 17 17:50:06 crc kubenswrapper[4680]: I0217 17:50:06.632957 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88dae8bc-f19a-4b03-9443-19af5340b54e-utilities\") pod \"certified-operators-b96s8\" (UID: \"88dae8bc-f19a-4b03-9443-19af5340b54e\") " pod="openshift-marketplace/certified-operators-b96s8" Feb 17 17:50:06 crc kubenswrapper[4680]: I0217 17:50:06.633022 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v6dg\" (UniqueName: \"kubernetes.io/projected/88dae8bc-f19a-4b03-9443-19af5340b54e-kube-api-access-4v6dg\") pod \"certified-operators-b96s8\" (UID: \"88dae8bc-f19a-4b03-9443-19af5340b54e\") " pod="openshift-marketplace/certified-operators-b96s8" Feb 17 17:50:06 crc kubenswrapper[4680]: I0217 
17:50:06.633073 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88dae8bc-f19a-4b03-9443-19af5340b54e-catalog-content\") pod \"certified-operators-b96s8\" (UID: \"88dae8bc-f19a-4b03-9443-19af5340b54e\") " pod="openshift-marketplace/certified-operators-b96s8" Feb 17 17:50:06 crc kubenswrapper[4680]: I0217 17:50:06.633508 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88dae8bc-f19a-4b03-9443-19af5340b54e-catalog-content\") pod \"certified-operators-b96s8\" (UID: \"88dae8bc-f19a-4b03-9443-19af5340b54e\") " pod="openshift-marketplace/certified-operators-b96s8" Feb 17 17:50:06 crc kubenswrapper[4680]: I0217 17:50:06.633602 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88dae8bc-f19a-4b03-9443-19af5340b54e-utilities\") pod \"certified-operators-b96s8\" (UID: \"88dae8bc-f19a-4b03-9443-19af5340b54e\") " pod="openshift-marketplace/certified-operators-b96s8" Feb 17 17:50:06 crc kubenswrapper[4680]: I0217 17:50:06.659375 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v6dg\" (UniqueName: \"kubernetes.io/projected/88dae8bc-f19a-4b03-9443-19af5340b54e-kube-api-access-4v6dg\") pod \"certified-operators-b96s8\" (UID: \"88dae8bc-f19a-4b03-9443-19af5340b54e\") " pod="openshift-marketplace/certified-operators-b96s8" Feb 17 17:50:06 crc kubenswrapper[4680]: I0217 17:50:06.665944 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b96s8" Feb 17 17:50:07 crc kubenswrapper[4680]: I0217 17:50:07.053052 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b96s8"] Feb 17 17:50:07 crc kubenswrapper[4680]: I0217 17:50:07.138935 4680 generic.go:334] "Generic (PLEG): container finished" podID="2660862d-dbb8-465e-a47d-91dab06d81cb" containerID="0eb9c84e055179635dcb844c100c0bd7c444495e445b386432e7311388543eb9" exitCode=0 Feb 17 17:50:07 crc kubenswrapper[4680]: I0217 17:50:07.138998 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6lssp" event={"ID":"2660862d-dbb8-465e-a47d-91dab06d81cb","Type":"ContainerDied","Data":"0eb9c84e055179635dcb844c100c0bd7c444495e445b386432e7311388543eb9"} Feb 17 17:50:07 crc kubenswrapper[4680]: I0217 17:50:07.144655 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b96s8" event={"ID":"88dae8bc-f19a-4b03-9443-19af5340b54e","Type":"ContainerStarted","Data":"6a0f546a27735c8be14dda22dc8ce7898189d60f4fd980b7e30402ed98e3d08d"} Feb 17 17:50:08 crc kubenswrapper[4680]: I0217 17:50:07.801568 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-px9ln"] Feb 17 17:50:08 crc kubenswrapper[4680]: I0217 17:50:07.802943 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-px9ln" Feb 17 17:50:08 crc kubenswrapper[4680]: I0217 17:50:07.805275 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 17 17:50:08 crc kubenswrapper[4680]: I0217 17:50:07.806511 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-px9ln"] Feb 17 17:50:08 crc kubenswrapper[4680]: I0217 17:50:07.899203 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bb4764c-4bf2-46c1-a940-0aa75a205881-utilities\") pod \"community-operators-px9ln\" (UID: \"3bb4764c-4bf2-46c1-a940-0aa75a205881\") " pod="openshift-marketplace/community-operators-px9ln" Feb 17 17:50:08 crc kubenswrapper[4680]: I0217 17:50:07.899304 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bb4764c-4bf2-46c1-a940-0aa75a205881-catalog-content\") pod \"community-operators-px9ln\" (UID: \"3bb4764c-4bf2-46c1-a940-0aa75a205881\") " pod="openshift-marketplace/community-operators-px9ln" Feb 17 17:50:08 crc kubenswrapper[4680]: I0217 17:50:07.899464 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5bn7\" (UniqueName: \"kubernetes.io/projected/3bb4764c-4bf2-46c1-a940-0aa75a205881-kube-api-access-k5bn7\") pod \"community-operators-px9ln\" (UID: \"3bb4764c-4bf2-46c1-a940-0aa75a205881\") " pod="openshift-marketplace/community-operators-px9ln" Feb 17 17:50:08 crc kubenswrapper[4680]: I0217 17:50:08.000777 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bb4764c-4bf2-46c1-a940-0aa75a205881-utilities\") pod \"community-operators-px9ln\" (UID: \"3bb4764c-4bf2-46c1-a940-0aa75a205881\") " pod="openshift-marketplace/community-operators-px9ln" Feb 17 17:50:08 crc kubenswrapper[4680]: I0217 17:50:08.000876 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bb4764c-4bf2-46c1-a940-0aa75a205881-catalog-content\") pod \"community-operators-px9ln\" (UID: \"3bb4764c-4bf2-46c1-a940-0aa75a205881\") " pod="openshift-marketplace/community-operators-px9ln" Feb 17 17:50:08 crc kubenswrapper[4680]: I0217 17:50:08.000916 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5bn7\" (UniqueName: \"kubernetes.io/projected/3bb4764c-4bf2-46c1-a940-0aa75a205881-kube-api-access-k5bn7\") pod \"community-operators-px9ln\" (UID: \"3bb4764c-4bf2-46c1-a940-0aa75a205881\") " pod="openshift-marketplace/community-operators-px9ln" Feb 17 17:50:08 crc kubenswrapper[4680]: I0217 17:50:08.001694 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bb4764c-4bf2-46c1-a940-0aa75a205881-utilities\") pod \"community-operators-px9ln\" (UID: \"3bb4764c-4bf2-46c1-a940-0aa75a205881\") " pod="openshift-marketplace/community-operators-px9ln" Feb 17 17:50:08 crc kubenswrapper[4680]: I0217 17:50:08.001949 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bb4764c-4bf2-46c1-a940-0aa75a205881-catalog-content\") pod \"community-operators-px9ln\" (UID: 
\"3bb4764c-4bf2-46c1-a940-0aa75a205881\") " pod="openshift-marketplace/community-operators-px9ln" Feb 17 17:50:08 crc kubenswrapper[4680]: I0217 17:50:08.104915 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5bn7\" (UniqueName: \"kubernetes.io/projected/3bb4764c-4bf2-46c1-a940-0aa75a205881-kube-api-access-k5bn7\") pod \"community-operators-px9ln\" (UID: \"3bb4764c-4bf2-46c1-a940-0aa75a205881\") " pod="openshift-marketplace/community-operators-px9ln" Feb 17 17:50:08 crc kubenswrapper[4680]: I0217 17:50:08.116691 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-px9ln" Feb 17 17:50:08 crc kubenswrapper[4680]: I0217 17:50:08.149502 4680 generic.go:334] "Generic (PLEG): container finished" podID="88dae8bc-f19a-4b03-9443-19af5340b54e" containerID="09fc4f6bb80cf5519d8bab389e1a3c104fb6c42165d43c180ceec56319f9e241" exitCode=0 Feb 17 17:50:08 crc kubenswrapper[4680]: I0217 17:50:08.149578 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b96s8" event={"ID":"88dae8bc-f19a-4b03-9443-19af5340b54e","Type":"ContainerDied","Data":"09fc4f6bb80cf5519d8bab389e1a3c104fb6c42165d43c180ceec56319f9e241"} Feb 17 17:50:08 crc kubenswrapper[4680]: I0217 17:50:08.156070 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc5x2" event={"ID":"36277038-c2ae-486f-aa6d-ac179eb779a5","Type":"ContainerStarted","Data":"c8298f50c98edfd8e780ced8dc6583c425a274a963c884537ab7d00902c768dd"} Feb 17 17:50:08 crc kubenswrapper[4680]: I0217 17:50:08.850980 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-px9ln"] Feb 17 17:50:09 crc kubenswrapper[4680]: I0217 17:50:09.162217 4680 generic.go:334] "Generic (PLEG): container finished" podID="36277038-c2ae-486f-aa6d-ac179eb779a5" containerID="c8298f50c98edfd8e780ced8dc6583c425a274a963c884537ab7d00902c768dd" exitCode=0 Feb 17 17:50:09 crc kubenswrapper[4680]: I0217 17:50:09.162355 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc5x2" event={"ID":"36277038-c2ae-486f-aa6d-ac179eb779a5","Type":"ContainerDied","Data":"c8298f50c98edfd8e780ced8dc6583c425a274a963c884537ab7d00902c768dd"} Feb 17 17:50:09 crc kubenswrapper[4680]: I0217 17:50:09.164587 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6lssp" event={"ID":"2660862d-dbb8-465e-a47d-91dab06d81cb","Type":"ContainerStarted","Data":"8bbf148d1281750149589182b5040ee8501c897dd650ee9644342496b4910d7f"} Feb 17 17:50:09 crc kubenswrapper[4680]: I0217 17:50:09.166706 4680 generic.go:334] "Generic (PLEG): container finished" podID="3bb4764c-4bf2-46c1-a940-0aa75a205881" containerID="f3ded2e5febdc440bb23483605074dfa14c5e867c8b00891eb4221315fbbae8b" exitCode=0 Feb 17 17:50:09 crc kubenswrapper[4680]: I0217 17:50:09.166732 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-px9ln" event={"ID":"3bb4764c-4bf2-46c1-a940-0aa75a205881","Type":"ContainerDied","Data":"f3ded2e5febdc440bb23483605074dfa14c5e867c8b00891eb4221315fbbae8b"} Feb 17 17:50:09 crc kubenswrapper[4680]: I0217 17:50:09.166748 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-px9ln" 
event={"ID":"3bb4764c-4bf2-46c1-a940-0aa75a205881","Type":"ContainerStarted","Data":"7b81c93cbf0769711a210f59b3e40d951ab02030a9478f6db8ba35b1fa5bdecc"} Feb 17 17:50:09 crc kubenswrapper[4680]: I0217 17:50:09.201998 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6lssp" podStartSLOduration=2.455784429 podStartE2EDuration="5.201977422s" podCreationTimestamp="2026-02-17 17:50:04 +0000 UTC" firstStartedPulling="2026-02-17 17:50:06.127114862 +0000 UTC m=+335.678013541" lastFinishedPulling="2026-02-17 17:50:08.873307855 +0000 UTC m=+338.424206534" observedRunningTime="2026-02-17 17:50:09.198168623 +0000 UTC m=+338.749067312" watchObservedRunningTime="2026-02-17 17:50:09.201977422 +0000 UTC m=+338.752876101" Feb 17 17:50:11 crc kubenswrapper[4680]: I0217 17:50:11.182389 4680 generic.go:334] "Generic (PLEG): container finished" podID="88dae8bc-f19a-4b03-9443-19af5340b54e" containerID="7a7b961da6b82d975494f3c8d590a76a9f1b43f209619e3a6dd5c4ab683268ee" exitCode=0 Feb 17 17:50:11 crc kubenswrapper[4680]: I0217 17:50:11.182434 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b96s8" event={"ID":"88dae8bc-f19a-4b03-9443-19af5340b54e","Type":"ContainerDied","Data":"7a7b961da6b82d975494f3c8d590a76a9f1b43f209619e3a6dd5c4ab683268ee"} Feb 17 17:50:12 crc kubenswrapper[4680]: I0217 17:50:12.189334 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc5x2" event={"ID":"36277038-c2ae-486f-aa6d-ac179eb779a5","Type":"ContainerStarted","Data":"7c55cce0b735af12653340a9494eb146569c5a5a5314dbed4f0c51ddacb9dfdc"} Feb 17 17:50:12 crc kubenswrapper[4680]: I0217 17:50:12.207757 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cc5x2" podStartSLOduration=2.666042503 podStartE2EDuration="8.207738256s" podCreationTimestamp="2026-02-17 17:50:04 +0000 UTC" firstStartedPulling="2026-02-17 17:50:06.127744042 +0000 UTC m=+335.678642721" lastFinishedPulling="2026-02-17 17:50:11.669439795 +0000 UTC m=+341.220338474" observedRunningTime="2026-02-17 17:50:12.205181536 +0000 UTC m=+341.756080235" watchObservedRunningTime="2026-02-17 17:50:12.207738256 +0000 UTC m=+341.758636935" Feb 17 17:50:14 crc kubenswrapper[4680]: I0217 17:50:14.087635 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86489848d5-tsqms"] Feb 17 17:50:14 crc kubenswrapper[4680]: I0217 17:50:14.088136 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-86489848d5-tsqms" podUID="fd252884-e940-4f2e-a8ca-2b1e2c5b721e" containerName="controller-manager" containerID="cri-o://9f66eed03797c51563fc6bea66d986adb604cf7892726706e4feef888a54bcd4" gracePeriod=30 Feb 17 17:50:14 crc kubenswrapper[4680]: I0217 17:50:14.093900 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85fb6447fc-zrh2c"] Feb 17 17:50:14 crc kubenswrapper[4680]: I0217 17:50:14.094677 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-zrh2c" podUID="59cc51cb-694e-4e18-8c20-b0f8594d26d4" containerName="route-controller-manager" containerID="cri-o://f95706ea2a22b4a67670aaf994e19a18aad989fdf831820dfa8f5509fccef4c8" gracePeriod=30 Feb 17 17:50:14 crc kubenswrapper[4680]: I0217 
17:50:14.205064 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b96s8" event={"ID":"88dae8bc-f19a-4b03-9443-19af5340b54e","Type":"ContainerStarted","Data":"30ce9ebebfd6078f6ad2dc3b230b28372f29dfeef1c66357aedba54f73d5ab20"} Feb 17 17:50:14 crc kubenswrapper[4680]: I0217 17:50:14.207392 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-px9ln" event={"ID":"3bb4764c-4bf2-46c1-a940-0aa75a205881","Type":"ContainerStarted","Data":"90975f8f142e8b89fba615fb19a7231e7b1118098925919bf9202d00e1158dda"} Feb 17 17:50:14 crc kubenswrapper[4680]: I0217 17:50:14.872429 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6lssp" Feb 17 17:50:14 crc kubenswrapper[4680]: I0217 17:50:14.872904 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6lssp" Feb 17 17:50:14 crc kubenswrapper[4680]: I0217 17:50:14.913057 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6lssp" Feb 17 17:50:15 crc kubenswrapper[4680]: I0217 17:50:15.067573 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cc5x2" Feb 17 17:50:15 crc kubenswrapper[4680]: I0217 17:50:15.068504 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cc5x2" Feb 17 17:50:15 crc kubenswrapper[4680]: I0217 17:50:15.214061 4680 generic.go:334] "Generic (PLEG): container finished" podID="3bb4764c-4bf2-46c1-a940-0aa75a205881" containerID="90975f8f142e8b89fba615fb19a7231e7b1118098925919bf9202d00e1158dda" exitCode=0 Feb 17 17:50:15 crc kubenswrapper[4680]: I0217 17:50:15.214699 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-px9ln" event={"ID":"3bb4764c-4bf2-46c1-a940-0aa75a205881","Type":"ContainerDied","Data":"90975f8f142e8b89fba615fb19a7231e7b1118098925919bf9202d00e1158dda"} Feb 17 17:50:15 crc kubenswrapper[4680]: I0217 17:50:15.235243 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b96s8" podStartSLOduration=4.624902172 podStartE2EDuration="9.235223198s" podCreationTimestamp="2026-02-17 17:50:06 +0000 UTC" firstStartedPulling="2026-02-17 17:50:08.151759031 +0000 UTC m=+337.702657710" lastFinishedPulling="2026-02-17 17:50:12.762080057 +0000 UTC m=+342.312978736" observedRunningTime="2026-02-17 17:50:15.233175895 +0000 UTC m=+344.784074594" watchObservedRunningTime="2026-02-17 17:50:15.235223198 +0000 UTC m=+344.786121877" Feb 17 17:50:15 crc kubenswrapper[4680]: I0217 17:50:15.253981 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6lssp" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.110590 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cc5x2" podUID="36277038-c2ae-486f-aa6d-ac179eb779a5" containerName="registry-server" probeResult="failure" output=< Feb 17 17:50:16 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Feb 17 17:50:16 crc kubenswrapper[4680]: > Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.234625 4680 generic.go:334] "Generic (PLEG): container finished" podID="59cc51cb-694e-4e18-8c20-b0f8594d26d4" 
containerID="f95706ea2a22b4a67670aaf994e19a18aad989fdf831820dfa8f5509fccef4c8" exitCode=0 Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.234714 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-zrh2c" event={"ID":"59cc51cb-694e-4e18-8c20-b0f8594d26d4","Type":"ContainerDied","Data":"f95706ea2a22b4a67670aaf994e19a18aad989fdf831820dfa8f5509fccef4c8"} Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.245638 4680 generic.go:334] "Generic (PLEG): container finished" podID="fd252884-e940-4f2e-a8ca-2b1e2c5b721e" containerID="9f66eed03797c51563fc6bea66d986adb604cf7892726706e4feef888a54bcd4" exitCode=0 Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.245720 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86489848d5-tsqms" event={"ID":"fd252884-e940-4f2e-a8ca-2b1e2c5b721e","Type":"ContainerDied","Data":"9f66eed03797c51563fc6bea66d986adb604cf7892726706e4feef888a54bcd4"} Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.260032 4680 patch_prober.go:28] interesting pod/controller-manager-86489848d5-tsqms container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.260101 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-86489848d5-tsqms" podUID="fd252884-e940-4f2e-a8ca-2b1e2c5b721e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.477282 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86489848d5-tsqms" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.481975 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-zrh2c" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.501383 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6df77d6fb4-5wskw"] Feb 17 17:50:16 crc kubenswrapper[4680]: E0217 17:50:16.501838 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd252884-e940-4f2e-a8ca-2b1e2c5b721e" containerName="controller-manager" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.501913 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd252884-e940-4f2e-a8ca-2b1e2c5b721e" containerName="controller-manager" Feb 17 17:50:16 crc kubenswrapper[4680]: E0217 17:50:16.501983 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59cc51cb-694e-4e18-8c20-b0f8594d26d4" containerName="route-controller-manager" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.502041 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="59cc51cb-694e-4e18-8c20-b0f8594d26d4" containerName="route-controller-manager" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.502175 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd252884-e940-4f2e-a8ca-2b1e2c5b721e" containerName="controller-manager" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.502286 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="59cc51cb-694e-4e18-8c20-b0f8594d26d4" containerName="route-controller-manager" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.505583 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6df77d6fb4-5wskw" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.518811 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6df77d6fb4-5wskw"] Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.619916 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59cc51cb-694e-4e18-8c20-b0f8594d26d4-client-ca\") pod \"59cc51cb-694e-4e18-8c20-b0f8594d26d4\" (UID: \"59cc51cb-694e-4e18-8c20-b0f8594d26d4\") " Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.619994 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59cc51cb-694e-4e18-8c20-b0f8594d26d4-serving-cert\") pod \"59cc51cb-694e-4e18-8c20-b0f8594d26d4\" (UID: \"59cc51cb-694e-4e18-8c20-b0f8594d26d4\") " Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.620046 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd252884-e940-4f2e-a8ca-2b1e2c5b721e-serving-cert\") pod \"fd252884-e940-4f2e-a8ca-2b1e2c5b721e\" (UID: \"fd252884-e940-4f2e-a8ca-2b1e2c5b721e\") " Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.620082 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd252884-e940-4f2e-a8ca-2b1e2c5b721e-config\") pod \"fd252884-e940-4f2e-a8ca-2b1e2c5b721e\" (UID: \"fd252884-e940-4f2e-a8ca-2b1e2c5b721e\") " Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.620108 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4h4c7\" (UniqueName: 
\"kubernetes.io/projected/fd252884-e940-4f2e-a8ca-2b1e2c5b721e-kube-api-access-4h4c7\") pod \"fd252884-e940-4f2e-a8ca-2b1e2c5b721e\" (UID: \"fd252884-e940-4f2e-a8ca-2b1e2c5b721e\") " Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.620155 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59cc51cb-694e-4e18-8c20-b0f8594d26d4-config\") pod \"59cc51cb-694e-4e18-8c20-b0f8594d26d4\" (UID: \"59cc51cb-694e-4e18-8c20-b0f8594d26d4\") " Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.620182 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd252884-e940-4f2e-a8ca-2b1e2c5b721e-proxy-ca-bundles\") pod \"fd252884-e940-4f2e-a8ca-2b1e2c5b721e\" (UID: \"fd252884-e940-4f2e-a8ca-2b1e2c5b721e\") " Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.620243 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd252884-e940-4f2e-a8ca-2b1e2c5b721e-client-ca\") pod \"fd252884-e940-4f2e-a8ca-2b1e2c5b721e\" (UID: \"fd252884-e940-4f2e-a8ca-2b1e2c5b721e\") " Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.620267 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcgzr\" (UniqueName: \"kubernetes.io/projected/59cc51cb-694e-4e18-8c20-b0f8594d26d4-kube-api-access-gcgzr\") pod \"59cc51cb-694e-4e18-8c20-b0f8594d26d4\" (UID: \"59cc51cb-694e-4e18-8c20-b0f8594d26d4\") " Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.620456 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cecbc156-5aad-41ec-b86c-840856e96e1c-config\") pod \"controller-manager-6df77d6fb4-5wskw\" (UID: \"cecbc156-5aad-41ec-b86c-840856e96e1c\") " pod="openshift-controller-manager/controller-manager-6df77d6fb4-5wskw" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.620499 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cecbc156-5aad-41ec-b86c-840856e96e1c-client-ca\") pod \"controller-manager-6df77d6fb4-5wskw\" (UID: \"cecbc156-5aad-41ec-b86c-840856e96e1c\") " pod="openshift-controller-manager/controller-manager-6df77d6fb4-5wskw" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.620534 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cecbc156-5aad-41ec-b86c-840856e96e1c-proxy-ca-bundles\") pod \"controller-manager-6df77d6fb4-5wskw\" (UID: \"cecbc156-5aad-41ec-b86c-840856e96e1c\") " pod="openshift-controller-manager/controller-manager-6df77d6fb4-5wskw" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.620580 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cecbc156-5aad-41ec-b86c-840856e96e1c-serving-cert\") pod \"controller-manager-6df77d6fb4-5wskw\" (UID: \"cecbc156-5aad-41ec-b86c-840856e96e1c\") " pod="openshift-controller-manager/controller-manager-6df77d6fb4-5wskw" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.620607 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9lnv\" (UniqueName: 
\"kubernetes.io/projected/cecbc156-5aad-41ec-b86c-840856e96e1c-kube-api-access-g9lnv\") pod \"controller-manager-6df77d6fb4-5wskw\" (UID: \"cecbc156-5aad-41ec-b86c-840856e96e1c\") " pod="openshift-controller-manager/controller-manager-6df77d6fb4-5wskw" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.620758 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59cc51cb-694e-4e18-8c20-b0f8594d26d4-client-ca" (OuterVolumeSpecName: "client-ca") pod "59cc51cb-694e-4e18-8c20-b0f8594d26d4" (UID: "59cc51cb-694e-4e18-8c20-b0f8594d26d4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.620773 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59cc51cb-694e-4e18-8c20-b0f8594d26d4-config" (OuterVolumeSpecName: "config") pod "59cc51cb-694e-4e18-8c20-b0f8594d26d4" (UID: "59cc51cb-694e-4e18-8c20-b0f8594d26d4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.620816 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd252884-e940-4f2e-a8ca-2b1e2c5b721e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "fd252884-e940-4f2e-a8ca-2b1e2c5b721e" (UID: "fd252884-e940-4f2e-a8ca-2b1e2c5b721e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.620920 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd252884-e940-4f2e-a8ca-2b1e2c5b721e-config" (OuterVolumeSpecName: "config") pod "fd252884-e940-4f2e-a8ca-2b1e2c5b721e" (UID: "fd252884-e940-4f2e-a8ca-2b1e2c5b721e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.620980 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd252884-e940-4f2e-a8ca-2b1e2c5b721e-client-ca" (OuterVolumeSpecName: "client-ca") pod "fd252884-e940-4f2e-a8ca-2b1e2c5b721e" (UID: "fd252884-e940-4f2e-a8ca-2b1e2c5b721e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.625348 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59cc51cb-694e-4e18-8c20-b0f8594d26d4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "59cc51cb-694e-4e18-8c20-b0f8594d26d4" (UID: "59cc51cb-694e-4e18-8c20-b0f8594d26d4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.625443 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59cc51cb-694e-4e18-8c20-b0f8594d26d4-kube-api-access-gcgzr" (OuterVolumeSpecName: "kube-api-access-gcgzr") pod "59cc51cb-694e-4e18-8c20-b0f8594d26d4" (UID: "59cc51cb-694e-4e18-8c20-b0f8594d26d4"). InnerVolumeSpecName "kube-api-access-gcgzr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.626728 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd252884-e940-4f2e-a8ca-2b1e2c5b721e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fd252884-e940-4f2e-a8ca-2b1e2c5b721e" (UID: "fd252884-e940-4f2e-a8ca-2b1e2c5b721e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.626845 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd252884-e940-4f2e-a8ca-2b1e2c5b721e-kube-api-access-4h4c7" (OuterVolumeSpecName: "kube-api-access-4h4c7") pod "fd252884-e940-4f2e-a8ca-2b1e2c5b721e" (UID: "fd252884-e940-4f2e-a8ca-2b1e2c5b721e"). InnerVolumeSpecName "kube-api-access-4h4c7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.666955 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b96s8" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.667260 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b96s8" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.711746 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b96s8" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.721899 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cecbc156-5aad-41ec-b86c-840856e96e1c-config\") pod \"controller-manager-6df77d6fb4-5wskw\" (UID: \"cecbc156-5aad-41ec-b86c-840856e96e1c\") " pod="openshift-controller-manager/controller-manager-6df77d6fb4-5wskw" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.721959 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cecbc156-5aad-41ec-b86c-840856e96e1c-client-ca\") pod \"controller-manager-6df77d6fb4-5wskw\" (UID: \"cecbc156-5aad-41ec-b86c-840856e96e1c\") " pod="openshift-controller-manager/controller-manager-6df77d6fb4-5wskw" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.721998 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cecbc156-5aad-41ec-b86c-840856e96e1c-proxy-ca-bundles\") pod \"controller-manager-6df77d6fb4-5wskw\" (UID: \"cecbc156-5aad-41ec-b86c-840856e96e1c\") " pod="openshift-controller-manager/controller-manager-6df77d6fb4-5wskw" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.722051 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cecbc156-5aad-41ec-b86c-840856e96e1c-serving-cert\") pod \"controller-manager-6df77d6fb4-5wskw\" (UID: \"cecbc156-5aad-41ec-b86c-840856e96e1c\") " pod="openshift-controller-manager/controller-manager-6df77d6fb4-5wskw" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.722074 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9lnv\" (UniqueName: \"kubernetes.io/projected/cecbc156-5aad-41ec-b86c-840856e96e1c-kube-api-access-g9lnv\") pod \"controller-manager-6df77d6fb4-5wskw\" (UID: \"cecbc156-5aad-41ec-b86c-840856e96e1c\") " 
pod="openshift-controller-manager/controller-manager-6df77d6fb4-5wskw" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.722120 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd252884-e940-4f2e-a8ca-2b1e2c5b721e-config\") on node \"crc\" DevicePath \"\"" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.722133 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4h4c7\" (UniqueName: \"kubernetes.io/projected/fd252884-e940-4f2e-a8ca-2b1e2c5b721e-kube-api-access-4h4c7\") on node \"crc\" DevicePath \"\"" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.722143 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59cc51cb-694e-4e18-8c20-b0f8594d26d4-config\") on node \"crc\" DevicePath \"\"" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.722154 4680 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd252884-e940-4f2e-a8ca-2b1e2c5b721e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.722166 4680 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd252884-e940-4f2e-a8ca-2b1e2c5b721e-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.722178 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcgzr\" (UniqueName: \"kubernetes.io/projected/59cc51cb-694e-4e18-8c20-b0f8594d26d4-kube-api-access-gcgzr\") on node \"crc\" DevicePath \"\"" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.722189 4680 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59cc51cb-694e-4e18-8c20-b0f8594d26d4-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.722201 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59cc51cb-694e-4e18-8c20-b0f8594d26d4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.722211 4680 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd252884-e940-4f2e-a8ca-2b1e2c5b721e-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.723079 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cecbc156-5aad-41ec-b86c-840856e96e1c-client-ca\") pod \"controller-manager-6df77d6fb4-5wskw\" (UID: \"cecbc156-5aad-41ec-b86c-840856e96e1c\") " pod="openshift-controller-manager/controller-manager-6df77d6fb4-5wskw" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.723672 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cecbc156-5aad-41ec-b86c-840856e96e1c-config\") pod \"controller-manager-6df77d6fb4-5wskw\" (UID: \"cecbc156-5aad-41ec-b86c-840856e96e1c\") " pod="openshift-controller-manager/controller-manager-6df77d6fb4-5wskw" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.725289 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cecbc156-5aad-41ec-b86c-840856e96e1c-serving-cert\") pod \"controller-manager-6df77d6fb4-5wskw\" (UID: 
\"cecbc156-5aad-41ec-b86c-840856e96e1c\") " pod="openshift-controller-manager/controller-manager-6df77d6fb4-5wskw" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.725821 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cecbc156-5aad-41ec-b86c-840856e96e1c-proxy-ca-bundles\") pod \"controller-manager-6df77d6fb4-5wskw\" (UID: \"cecbc156-5aad-41ec-b86c-840856e96e1c\") " pod="openshift-controller-manager/controller-manager-6df77d6fb4-5wskw" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.741984 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9lnv\" (UniqueName: \"kubernetes.io/projected/cecbc156-5aad-41ec-b86c-840856e96e1c-kube-api-access-g9lnv\") pod \"controller-manager-6df77d6fb4-5wskw\" (UID: \"cecbc156-5aad-41ec-b86c-840856e96e1c\") " pod="openshift-controller-manager/controller-manager-6df77d6fb4-5wskw" Feb 17 17:50:16 crc kubenswrapper[4680]: I0217 17:50:16.820264 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6df77d6fb4-5wskw" Feb 17 17:50:17 crc kubenswrapper[4680]: I0217 17:50:17.012768 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6df77d6fb4-5wskw"] Feb 17 17:50:17 crc kubenswrapper[4680]: I0217 17:50:17.251071 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6df77d6fb4-5wskw" event={"ID":"cecbc156-5aad-41ec-b86c-840856e96e1c","Type":"ContainerStarted","Data":"bd2e12d4a1d745a33aaa91a275b90570c3ca07797dee0eba18e6a1993ef691aa"} Feb 17 17:50:17 crc kubenswrapper[4680]: I0217 17:50:17.251401 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6df77d6fb4-5wskw" Feb 17 17:50:17 crc kubenswrapper[4680]: I0217 17:50:17.251427 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6df77d6fb4-5wskw" event={"ID":"cecbc156-5aad-41ec-b86c-840856e96e1c","Type":"ContainerStarted","Data":"e99d3794c93fef1f45c8473e2ab2e9df8317394be323392fddab267e7472b1ec"} Feb 17 17:50:17 crc kubenswrapper[4680]: I0217 17:50:17.252483 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86489848d5-tsqms" event={"ID":"fd252884-e940-4f2e-a8ca-2b1e2c5b721e","Type":"ContainerDied","Data":"9eeb3ab63a920f4cad1f264079a8bf78bfc9b93884cc585cc1293ba2296d7fb4"} Feb 17 17:50:17 crc kubenswrapper[4680]: I0217 17:50:17.252502 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86489848d5-tsqms" Feb 17 17:50:17 crc kubenswrapper[4680]: I0217 17:50:17.252530 4680 scope.go:117] "RemoveContainer" containerID="9f66eed03797c51563fc6bea66d986adb604cf7892726706e4feef888a54bcd4" Feb 17 17:50:17 crc kubenswrapper[4680]: I0217 17:50:17.254948 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-px9ln" event={"ID":"3bb4764c-4bf2-46c1-a940-0aa75a205881","Type":"ContainerStarted","Data":"799f0c7a628806f76a7a230df6b13575c6fef0c943be1eee639b079d2758c3a3"} Feb 17 17:50:17 crc kubenswrapper[4680]: I0217 17:50:17.257267 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6df77d6fb4-5wskw" Feb 17 17:50:17 crc kubenswrapper[4680]: I0217 17:50:17.257726 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-zrh2c" event={"ID":"59cc51cb-694e-4e18-8c20-b0f8594d26d4","Type":"ContainerDied","Data":"764132271a8cba2b772c24157e2cc5a949d2646da15c4b4f72f38d7f153da432"} Feb 17 17:50:17 crc kubenswrapper[4680]: I0217 17:50:17.257787 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-zrh2c" Feb 17 17:50:17 crc kubenswrapper[4680]: I0217 17:50:17.265893 4680 scope.go:117] "RemoveContainer" containerID="f95706ea2a22b4a67670aaf994e19a18aad989fdf831820dfa8f5509fccef4c8" Feb 17 17:50:17 crc kubenswrapper[4680]: I0217 17:50:17.303889 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6df77d6fb4-5wskw" podStartSLOduration=3.303868268 podStartE2EDuration="3.303868268s" podCreationTimestamp="2026-02-17 17:50:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:50:17.280665544 +0000 UTC m=+346.831564233" watchObservedRunningTime="2026-02-17 17:50:17.303868268 +0000 UTC m=+346.854766957" Feb 17 17:50:17 crc kubenswrapper[4680]: I0217 17:50:17.306708 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85fb6447fc-zrh2c"] Feb 17 17:50:17 crc kubenswrapper[4680]: I0217 17:50:17.311581 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85fb6447fc-zrh2c"] Feb 17 17:50:17 crc kubenswrapper[4680]: I0217 17:50:17.338764 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86489848d5-tsqms"] Feb 17 17:50:17 crc kubenswrapper[4680]: I0217 17:50:17.344037 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-86489848d5-tsqms"] Feb 17 17:50:17 crc kubenswrapper[4680]: I0217 17:50:17.362337 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-px9ln" podStartSLOduration=3.295663457 podStartE2EDuration="10.362300719s" podCreationTimestamp="2026-02-17 17:50:07 +0000 UTC" firstStartedPulling="2026-02-17 17:50:09.168386805 +0000 UTC m=+338.719285484" lastFinishedPulling="2026-02-17 17:50:16.235024057 +0000 UTC m=+345.785922746" observedRunningTime="2026-02-17 17:50:17.358929004 +0000 UTC m=+346.909827683" watchObservedRunningTime="2026-02-17 17:50:17.362300719 +0000 
UTC m=+346.913199398" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.117346 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-px9ln" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.117392 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-px9ln" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.147560 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9r67k"] Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.148213 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9r67k" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.169387 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9r67k"] Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.239342 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/35c84a5e-2ffb-4d83-88d1-d84bf70506c1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9r67k\" (UID: \"35c84a5e-2ffb-4d83-88d1-d84bf70506c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-9r67k" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.239552 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/35c84a5e-2ffb-4d83-88d1-d84bf70506c1-bound-sa-token\") pod \"image-registry-66df7c8f76-9r67k\" (UID: \"35c84a5e-2ffb-4d83-88d1-d84bf70506c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-9r67k" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.239691 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35c84a5e-2ffb-4d83-88d1-d84bf70506c1-trusted-ca\") pod \"image-registry-66df7c8f76-9r67k\" (UID: \"35c84a5e-2ffb-4d83-88d1-d84bf70506c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-9r67k" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.239738 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/35c84a5e-2ffb-4d83-88d1-d84bf70506c1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9r67k\" (UID: \"35c84a5e-2ffb-4d83-88d1-d84bf70506c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-9r67k" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.239764 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/35c84a5e-2ffb-4d83-88d1-d84bf70506c1-registry-certificates\") pod \"image-registry-66df7c8f76-9r67k\" (UID: \"35c84a5e-2ffb-4d83-88d1-d84bf70506c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-9r67k" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.239836 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs76b\" (UniqueName: \"kubernetes.io/projected/35c84a5e-2ffb-4d83-88d1-d84bf70506c1-kube-api-access-fs76b\") pod \"image-registry-66df7c8f76-9r67k\" (UID: \"35c84a5e-2ffb-4d83-88d1-d84bf70506c1\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-9r67k" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.239904 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9r67k\" (UID: \"35c84a5e-2ffb-4d83-88d1-d84bf70506c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-9r67k" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.239948 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/35c84a5e-2ffb-4d83-88d1-d84bf70506c1-registry-tls\") pod \"image-registry-66df7c8f76-9r67k\" (UID: \"35c84a5e-2ffb-4d83-88d1-d84bf70506c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-9r67k" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.267211 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9r67k\" (UID: \"35c84a5e-2ffb-4d83-88d1-d84bf70506c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-9r67k" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.317958 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b96s8" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.340671 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35c84a5e-2ffb-4d83-88d1-d84bf70506c1-trusted-ca\") pod \"image-registry-66df7c8f76-9r67k\" (UID: \"35c84a5e-2ffb-4d83-88d1-d84bf70506c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-9r67k" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.340727 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/35c84a5e-2ffb-4d83-88d1-d84bf70506c1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9r67k\" (UID: \"35c84a5e-2ffb-4d83-88d1-d84bf70506c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-9r67k" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.340754 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/35c84a5e-2ffb-4d83-88d1-d84bf70506c1-registry-certificates\") pod \"image-registry-66df7c8f76-9r67k\" (UID: \"35c84a5e-2ffb-4d83-88d1-d84bf70506c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-9r67k" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.340789 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs76b\" (UniqueName: \"kubernetes.io/projected/35c84a5e-2ffb-4d83-88d1-d84bf70506c1-kube-api-access-fs76b\") pod \"image-registry-66df7c8f76-9r67k\" (UID: \"35c84a5e-2ffb-4d83-88d1-d84bf70506c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-9r67k" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.340822 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/35c84a5e-2ffb-4d83-88d1-d84bf70506c1-registry-tls\") pod \"image-registry-66df7c8f76-9r67k\" (UID: 
\"35c84a5e-2ffb-4d83-88d1-d84bf70506c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-9r67k" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.340844 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/35c84a5e-2ffb-4d83-88d1-d84bf70506c1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9r67k\" (UID: \"35c84a5e-2ffb-4d83-88d1-d84bf70506c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-9r67k" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.340890 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/35c84a5e-2ffb-4d83-88d1-d84bf70506c1-bound-sa-token\") pod \"image-registry-66df7c8f76-9r67k\" (UID: \"35c84a5e-2ffb-4d83-88d1-d84bf70506c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-9r67k" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.341815 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/35c84a5e-2ffb-4d83-88d1-d84bf70506c1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9r67k\" (UID: \"35c84a5e-2ffb-4d83-88d1-d84bf70506c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-9r67k" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.342372 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35c84a5e-2ffb-4d83-88d1-d84bf70506c1-trusted-ca\") pod \"image-registry-66df7c8f76-9r67k\" (UID: \"35c84a5e-2ffb-4d83-88d1-d84bf70506c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-9r67k" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.343052 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/35c84a5e-2ffb-4d83-88d1-d84bf70506c1-registry-certificates\") pod \"image-registry-66df7c8f76-9r67k\" (UID: \"35c84a5e-2ffb-4d83-88d1-d84bf70506c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-9r67k" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.350137 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/35c84a5e-2ffb-4d83-88d1-d84bf70506c1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9r67k\" (UID: \"35c84a5e-2ffb-4d83-88d1-d84bf70506c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-9r67k" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.382347 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/35c84a5e-2ffb-4d83-88d1-d84bf70506c1-registry-tls\") pod \"image-registry-66df7c8f76-9r67k\" (UID: \"35c84a5e-2ffb-4d83-88d1-d84bf70506c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-9r67k" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.399530 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/35c84a5e-2ffb-4d83-88d1-d84bf70506c1-bound-sa-token\") pod \"image-registry-66df7c8f76-9r67k\" (UID: \"35c84a5e-2ffb-4d83-88d1-d84bf70506c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-9r67k" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.405995 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs76b\" 
(UniqueName: \"kubernetes.io/projected/35c84a5e-2ffb-4d83-88d1-d84bf70506c1-kube-api-access-fs76b\") pod \"image-registry-66df7c8f76-9r67k\" (UID: \"35c84a5e-2ffb-4d83-88d1-d84bf70506c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-9r67k" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.463398 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9r67k" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.667581 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9r67k"] Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.691830 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75977b69fc-bn26x"] Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.692719 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75977b69fc-bn26x" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.696508 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.696685 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.697094 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.697460 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.698625 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.699532 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.721665 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75977b69fc-bn26x"] Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.848269 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc57d76f-7c0a-4657-9b7d-6216f96b1374-client-ca\") pod \"route-controller-manager-75977b69fc-bn26x\" (UID: \"fc57d76f-7c0a-4657-9b7d-6216f96b1374\") " pod="openshift-route-controller-manager/route-controller-manager-75977b69fc-bn26x" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.848340 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc57d76f-7c0a-4657-9b7d-6216f96b1374-serving-cert\") pod \"route-controller-manager-75977b69fc-bn26x\" (UID: \"fc57d76f-7c0a-4657-9b7d-6216f96b1374\") " pod="openshift-route-controller-manager/route-controller-manager-75977b69fc-bn26x" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.848374 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr4q7\" (UniqueName: 
\"kubernetes.io/projected/fc57d76f-7c0a-4657-9b7d-6216f96b1374-kube-api-access-vr4q7\") pod \"route-controller-manager-75977b69fc-bn26x\" (UID: \"fc57d76f-7c0a-4657-9b7d-6216f96b1374\") " pod="openshift-route-controller-manager/route-controller-manager-75977b69fc-bn26x" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.848479 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc57d76f-7c0a-4657-9b7d-6216f96b1374-config\") pod \"route-controller-manager-75977b69fc-bn26x\" (UID: \"fc57d76f-7c0a-4657-9b7d-6216f96b1374\") " pod="openshift-route-controller-manager/route-controller-manager-75977b69fc-bn26x" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.949059 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc57d76f-7c0a-4657-9b7d-6216f96b1374-serving-cert\") pod \"route-controller-manager-75977b69fc-bn26x\" (UID: \"fc57d76f-7c0a-4657-9b7d-6216f96b1374\") " pod="openshift-route-controller-manager/route-controller-manager-75977b69fc-bn26x" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.949101 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr4q7\" (UniqueName: \"kubernetes.io/projected/fc57d76f-7c0a-4657-9b7d-6216f96b1374-kube-api-access-vr4q7\") pod \"route-controller-manager-75977b69fc-bn26x\" (UID: \"fc57d76f-7c0a-4657-9b7d-6216f96b1374\") " pod="openshift-route-controller-manager/route-controller-manager-75977b69fc-bn26x" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.949183 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc57d76f-7c0a-4657-9b7d-6216f96b1374-config\") pod \"route-controller-manager-75977b69fc-bn26x\" (UID: \"fc57d76f-7c0a-4657-9b7d-6216f96b1374\") " pod="openshift-route-controller-manager/route-controller-manager-75977b69fc-bn26x" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.949215 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc57d76f-7c0a-4657-9b7d-6216f96b1374-client-ca\") pod \"route-controller-manager-75977b69fc-bn26x\" (UID: \"fc57d76f-7c0a-4657-9b7d-6216f96b1374\") " pod="openshift-route-controller-manager/route-controller-manager-75977b69fc-bn26x" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.950432 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc57d76f-7c0a-4657-9b7d-6216f96b1374-client-ca\") pod \"route-controller-manager-75977b69fc-bn26x\" (UID: \"fc57d76f-7c0a-4657-9b7d-6216f96b1374\") " pod="openshift-route-controller-manager/route-controller-manager-75977b69fc-bn26x" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.950913 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc57d76f-7c0a-4657-9b7d-6216f96b1374-config\") pod \"route-controller-manager-75977b69fc-bn26x\" (UID: \"fc57d76f-7c0a-4657-9b7d-6216f96b1374\") " pod="openshift-route-controller-manager/route-controller-manager-75977b69fc-bn26x" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.955920 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc57d76f-7c0a-4657-9b7d-6216f96b1374-serving-cert\") pod 
\"route-controller-manager-75977b69fc-bn26x\" (UID: \"fc57d76f-7c0a-4657-9b7d-6216f96b1374\") " pod="openshift-route-controller-manager/route-controller-manager-75977b69fc-bn26x" Feb 17 17:50:18 crc kubenswrapper[4680]: I0217 17:50:18.967058 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr4q7\" (UniqueName: \"kubernetes.io/projected/fc57d76f-7c0a-4657-9b7d-6216f96b1374-kube-api-access-vr4q7\") pod \"route-controller-manager-75977b69fc-bn26x\" (UID: \"fc57d76f-7c0a-4657-9b7d-6216f96b1374\") " pod="openshift-route-controller-manager/route-controller-manager-75977b69fc-bn26x" Feb 17 17:50:19 crc kubenswrapper[4680]: I0217 17:50:19.007286 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75977b69fc-bn26x" Feb 17 17:50:19 crc kubenswrapper[4680]: I0217 17:50:19.111273 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59cc51cb-694e-4e18-8c20-b0f8594d26d4" path="/var/lib/kubelet/pods/59cc51cb-694e-4e18-8c20-b0f8594d26d4/volumes" Feb 17 17:50:19 crc kubenswrapper[4680]: I0217 17:50:19.112019 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd252884-e940-4f2e-a8ca-2b1e2c5b721e" path="/var/lib/kubelet/pods/fd252884-e940-4f2e-a8ca-2b1e2c5b721e/volumes" Feb 17 17:50:19 crc kubenswrapper[4680]: I0217 17:50:19.176176 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-px9ln" podUID="3bb4764c-4bf2-46c1-a940-0aa75a205881" containerName="registry-server" probeResult="failure" output=< Feb 17 17:50:19 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Feb 17 17:50:19 crc kubenswrapper[4680]: > Feb 17 17:50:19 crc kubenswrapper[4680]: I0217 17:50:19.284441 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9r67k" event={"ID":"35c84a5e-2ffb-4d83-88d1-d84bf70506c1","Type":"ContainerStarted","Data":"0ad61e6776bf73e0827b1b3a43fbf40089962666b1d26b72ed4231c4e868cdbc"} Feb 17 17:50:19 crc kubenswrapper[4680]: I0217 17:50:19.284752 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9r67k" event={"ID":"35c84a5e-2ffb-4d83-88d1-d84bf70506c1","Type":"ContainerStarted","Data":"9c30a31c7ac50e11e6735db1dbcfe6232ddd681c99cc71c1a2c453c9393a100b"} Feb 17 17:50:19 crc kubenswrapper[4680]: I0217 17:50:19.284887 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-9r67k" Feb 17 17:50:19 crc kubenswrapper[4680]: I0217 17:50:19.301658 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-9r67k" podStartSLOduration=1.301638568 podStartE2EDuration="1.301638568s" podCreationTimestamp="2026-02-17 17:50:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:50:19.297781788 +0000 UTC m=+348.848680477" watchObservedRunningTime="2026-02-17 17:50:19.301638568 +0000 UTC m=+348.852537257" Feb 17 17:50:19 crc kubenswrapper[4680]: I0217 17:50:19.397410 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75977b69fc-bn26x"] Feb 17 17:50:19 crc kubenswrapper[4680]: W0217 17:50:19.399444 4680 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc57d76f_7c0a_4657_9b7d_6216f96b1374.slice/crio-a33332daf7d2642b67a4d7378d965ba8d92f7608197384cf75f80eb092d111f6 WatchSource:0}: Error finding container a33332daf7d2642b67a4d7378d965ba8d92f7608197384cf75f80eb092d111f6: Status 404 returned error can't find the container with id a33332daf7d2642b67a4d7378d965ba8d92f7608197384cf75f80eb092d111f6 Feb 17 17:50:20 crc kubenswrapper[4680]: I0217 17:50:20.289691 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75977b69fc-bn26x" event={"ID":"fc57d76f-7c0a-4657-9b7d-6216f96b1374","Type":"ContainerStarted","Data":"966012b10ba8de505b260fdba91964650707a0f2b40d4d0ba0e9da4af742d12d"} Feb 17 17:50:20 crc kubenswrapper[4680]: I0217 17:50:20.290135 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75977b69fc-bn26x" event={"ID":"fc57d76f-7c0a-4657-9b7d-6216f96b1374","Type":"ContainerStarted","Data":"a33332daf7d2642b67a4d7378d965ba8d92f7608197384cf75f80eb092d111f6"} Feb 17 17:50:20 crc kubenswrapper[4680]: I0217 17:50:20.309224 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-75977b69fc-bn26x" podStartSLOduration=6.309193378 podStartE2EDuration="6.309193378s" podCreationTimestamp="2026-02-17 17:50:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:50:20.307154275 +0000 UTC m=+349.858052954" watchObservedRunningTime="2026-02-17 17:50:20.309193378 +0000 UTC m=+349.860092097" Feb 17 17:50:21 crc kubenswrapper[4680]: I0217 17:50:21.295272 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-75977b69fc-bn26x" Feb 17 17:50:21 crc kubenswrapper[4680]: I0217 17:50:21.302337 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-75977b69fc-bn26x" Feb 17 17:50:25 crc kubenswrapper[4680]: I0217 17:50:25.103376 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cc5x2" Feb 17 17:50:25 crc kubenswrapper[4680]: I0217 17:50:25.145718 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cc5x2" Feb 17 17:50:28 crc kubenswrapper[4680]: I0217 17:50:28.152758 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-px9ln" Feb 17 17:50:28 crc kubenswrapper[4680]: I0217 17:50:28.193480 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-px9ln" Feb 17 17:50:35 crc kubenswrapper[4680]: I0217 17:50:35.588817 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:50:35 crc kubenswrapper[4680]: I0217 17:50:35.589297 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:50:38 crc kubenswrapper[4680]: I0217 17:50:38.469043 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-9r67k" Feb 17 17:50:38 crc kubenswrapper[4680]: I0217 17:50:38.523725 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-z8mj5"] Feb 17 17:51:03 crc kubenswrapper[4680]: I0217 17:51:03.569793 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" podUID="1c640310-31cf-4feb-a173-5688042c6b6d" containerName="registry" containerID="cri-o://7993783c9a0bb724bc260f68469d98e67a7adf76ae25457aa001f3d42da5ed9d" gracePeriod=30 Feb 17 17:51:03 crc kubenswrapper[4680]: I0217 17:51:03.918247 4680 generic.go:334] "Generic (PLEG): container finished" podID="1c640310-31cf-4feb-a173-5688042c6b6d" containerID="7993783c9a0bb724bc260f68469d98e67a7adf76ae25457aa001f3d42da5ed9d" exitCode=0 Feb 17 17:51:03 crc kubenswrapper[4680]: I0217 17:51:03.918384 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" event={"ID":"1c640310-31cf-4feb-a173-5688042c6b6d","Type":"ContainerDied","Data":"7993783c9a0bb724bc260f68469d98e67a7adf76ae25457aa001f3d42da5ed9d"} Feb 17 17:51:03 crc kubenswrapper[4680]: I0217 17:51:03.972901 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:51:04 crc kubenswrapper[4680]: I0217 17:51:04.072917 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1c640310-31cf-4feb-a173-5688042c6b6d-installation-pull-secrets\") pod \"1c640310-31cf-4feb-a173-5688042c6b6d\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " Feb 17 17:51:04 crc kubenswrapper[4680]: I0217 17:51:04.073026 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1c640310-31cf-4feb-a173-5688042c6b6d-bound-sa-token\") pod \"1c640310-31cf-4feb-a173-5688042c6b6d\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " Feb 17 17:51:04 crc kubenswrapper[4680]: I0217 17:51:04.073046 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1c640310-31cf-4feb-a173-5688042c6b6d-registry-certificates\") pod \"1c640310-31cf-4feb-a173-5688042c6b6d\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " Feb 17 17:51:04 crc kubenswrapper[4680]: I0217 17:51:04.073245 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"1c640310-31cf-4feb-a173-5688042c6b6d\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " Feb 17 17:51:04 crc kubenswrapper[4680]: I0217 17:51:04.073282 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1c640310-31cf-4feb-a173-5688042c6b6d-ca-trust-extracted\") pod \"1c640310-31cf-4feb-a173-5688042c6b6d\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " Feb 17 17:51:04 crc kubenswrapper[4680]: I0217 
17:51:04.073305 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgczf\" (UniqueName: \"kubernetes.io/projected/1c640310-31cf-4feb-a173-5688042c6b6d-kube-api-access-kgczf\") pod \"1c640310-31cf-4feb-a173-5688042c6b6d\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " Feb 17 17:51:04 crc kubenswrapper[4680]: I0217 17:51:04.073367 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c640310-31cf-4feb-a173-5688042c6b6d-trusted-ca\") pod \"1c640310-31cf-4feb-a173-5688042c6b6d\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " Feb 17 17:51:04 crc kubenswrapper[4680]: I0217 17:51:04.073398 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1c640310-31cf-4feb-a173-5688042c6b6d-registry-tls\") pod \"1c640310-31cf-4feb-a173-5688042c6b6d\" (UID: \"1c640310-31cf-4feb-a173-5688042c6b6d\") " Feb 17 17:51:04 crc kubenswrapper[4680]: I0217 17:51:04.074246 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c640310-31cf-4feb-a173-5688042c6b6d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "1c640310-31cf-4feb-a173-5688042c6b6d" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:51:04 crc kubenswrapper[4680]: I0217 17:51:04.074373 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c640310-31cf-4feb-a173-5688042c6b6d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "1c640310-31cf-4feb-a173-5688042c6b6d" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:51:04 crc kubenswrapper[4680]: I0217 17:51:04.074626 4680 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c640310-31cf-4feb-a173-5688042c6b6d-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 17:51:04 crc kubenswrapper[4680]: I0217 17:51:04.074715 4680 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1c640310-31cf-4feb-a173-5688042c6b6d-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 17 17:51:04 crc kubenswrapper[4680]: I0217 17:51:04.080079 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c640310-31cf-4feb-a173-5688042c6b6d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "1c640310-31cf-4feb-a173-5688042c6b6d" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:51:04 crc kubenswrapper[4680]: I0217 17:51:04.080355 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c640310-31cf-4feb-a173-5688042c6b6d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "1c640310-31cf-4feb-a173-5688042c6b6d" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:51:04 crc kubenswrapper[4680]: I0217 17:51:04.080431 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c640310-31cf-4feb-a173-5688042c6b6d-kube-api-access-kgczf" (OuterVolumeSpecName: "kube-api-access-kgczf") pod "1c640310-31cf-4feb-a173-5688042c6b6d" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d"). InnerVolumeSpecName "kube-api-access-kgczf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:51:04 crc kubenswrapper[4680]: I0217 17:51:04.081071 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c640310-31cf-4feb-a173-5688042c6b6d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "1c640310-31cf-4feb-a173-5688042c6b6d" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:51:04 crc kubenswrapper[4680]: I0217 17:51:04.084949 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "1c640310-31cf-4feb-a173-5688042c6b6d" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 17:51:04 crc kubenswrapper[4680]: I0217 17:51:04.090359 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c640310-31cf-4feb-a173-5688042c6b6d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "1c640310-31cf-4feb-a173-5688042c6b6d" (UID: "1c640310-31cf-4feb-a173-5688042c6b6d"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:51:04 crc kubenswrapper[4680]: I0217 17:51:04.175380 4680 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1c640310-31cf-4feb-a173-5688042c6b6d-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 17 17:51:04 crc kubenswrapper[4680]: I0217 17:51:04.175422 4680 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1c640310-31cf-4feb-a173-5688042c6b6d-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 17 17:51:04 crc kubenswrapper[4680]: I0217 17:51:04.175433 4680 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1c640310-31cf-4feb-a173-5688042c6b6d-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 17:51:04 crc kubenswrapper[4680]: I0217 17:51:04.175442 4680 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1c640310-31cf-4feb-a173-5688042c6b6d-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 17 17:51:04 crc kubenswrapper[4680]: I0217 17:51:04.175450 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgczf\" (UniqueName: \"kubernetes.io/projected/1c640310-31cf-4feb-a173-5688042c6b6d-kube-api-access-kgczf\") on node \"crc\" DevicePath \"\"" Feb 17 17:51:04 crc kubenswrapper[4680]: I0217 17:51:04.923441 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" event={"ID":"1c640310-31cf-4feb-a173-5688042c6b6d","Type":"ContainerDied","Data":"7cc1a306007aba064311e0e3df1d90d3dff699860a758162b7f27493230e0246"} Feb 17 17:51:04 crc kubenswrapper[4680]: I0217 17:51:04.923488 4680 scope.go:117] "RemoveContainer" containerID="7993783c9a0bb724bc260f68469d98e67a7adf76ae25457aa001f3d42da5ed9d" Feb 17 17:51:04 crc kubenswrapper[4680]: I0217 17:51:04.923573 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-z8mj5" Feb 17 17:51:04 crc kubenswrapper[4680]: I0217 17:51:04.957487 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-z8mj5"] Feb 17 17:51:04 crc kubenswrapper[4680]: I0217 17:51:04.958052 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-z8mj5"] Feb 17 17:51:05 crc kubenswrapper[4680]: I0217 17:51:05.111231 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c640310-31cf-4feb-a173-5688042c6b6d" path="/var/lib/kubelet/pods/1c640310-31cf-4feb-a173-5688042c6b6d/volumes" Feb 17 17:51:05 crc kubenswrapper[4680]: I0217 17:51:05.588689 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:51:05 crc kubenswrapper[4680]: I0217 17:51:05.589009 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:51:05 crc kubenswrapper[4680]: I0217 17:51:05.589051 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" Feb 17 17:51:05 crc kubenswrapper[4680]: I0217 17:51:05.589624 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"edd5ddf683c1c79e1cf067d82d5193107707475d2e8243197b201d5f7d01d27d"} pod="openshift-machine-config-operator/machine-config-daemon-rfw56" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:51:05 crc kubenswrapper[4680]: I0217 17:51:05.589685 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" containerID="cri-o://edd5ddf683c1c79e1cf067d82d5193107707475d2e8243197b201d5f7d01d27d" gracePeriod=600 Feb 17 17:51:05 crc kubenswrapper[4680]: I0217 17:51:05.935521 4680 generic.go:334] "Generic (PLEG): container finished" podID="8dd08058-016e-4607-8be6-40463674c2fe" containerID="edd5ddf683c1c79e1cf067d82d5193107707475d2e8243197b201d5f7d01d27d" exitCode=0 Feb 17 17:51:05 crc kubenswrapper[4680]: I0217 17:51:05.935571 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerDied","Data":"edd5ddf683c1c79e1cf067d82d5193107707475d2e8243197b201d5f7d01d27d"} Feb 17 17:51:05 crc kubenswrapper[4680]: I0217 17:51:05.935619 4680 scope.go:117] "RemoveContainer" containerID="aa4741b4166ffdf535596b13f3d82c84594ed9169d61dc0ef59c6b62c1d1e8d6" Feb 17 17:51:06 crc kubenswrapper[4680]: I0217 17:51:06.942504 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" 
event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerStarted","Data":"4c4a97f59a07acf950ba570f64f5efedeae0f5884b8819e8f9957b63b9ada29d"} Feb 17 17:53:05 crc kubenswrapper[4680]: I0217 17:53:05.588733 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:53:05 crc kubenswrapper[4680]: I0217 17:53:05.589218 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:53:35 crc kubenswrapper[4680]: I0217 17:53:35.589211 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:53:35 crc kubenswrapper[4680]: I0217 17:53:35.590441 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:54:05 crc kubenswrapper[4680]: I0217 17:54:05.588119 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:54:05 crc kubenswrapper[4680]: I0217 17:54:05.588578 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:54:05 crc kubenswrapper[4680]: I0217 17:54:05.588620 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" Feb 17 17:54:05 crc kubenswrapper[4680]: I0217 17:54:05.589125 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4c4a97f59a07acf950ba570f64f5efedeae0f5884b8819e8f9957b63b9ada29d"} pod="openshift-machine-config-operator/machine-config-daemon-rfw56" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:54:05 crc kubenswrapper[4680]: I0217 17:54:05.589175 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" containerID="cri-o://4c4a97f59a07acf950ba570f64f5efedeae0f5884b8819e8f9957b63b9ada29d" gracePeriod=600 Feb 17 17:54:05 crc kubenswrapper[4680]: I0217 17:54:05.948934 4680 generic.go:334] "Generic (PLEG): container finished" 
podID="8dd08058-016e-4607-8be6-40463674c2fe" containerID="4c4a97f59a07acf950ba570f64f5efedeae0f5884b8819e8f9957b63b9ada29d" exitCode=0 Feb 17 17:54:05 crc kubenswrapper[4680]: I0217 17:54:05.949122 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerDied","Data":"4c4a97f59a07acf950ba570f64f5efedeae0f5884b8819e8f9957b63b9ada29d"} Feb 17 17:54:05 crc kubenswrapper[4680]: I0217 17:54:05.949229 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerStarted","Data":"b2c9e5cbe14222efd0ffdc11f960ea7f538fdce4e27dc297816e521875a655d0"} Feb 17 17:54:05 crc kubenswrapper[4680]: I0217 17:54:05.949249 4680 scope.go:117] "RemoveContainer" containerID="edd5ddf683c1c79e1cf067d82d5193107707475d2e8243197b201d5f7d01d27d" Feb 17 17:54:48 crc kubenswrapper[4680]: I0217 17:54:48.266078 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-r5b7s"] Feb 17 17:54:48 crc kubenswrapper[4680]: E0217 17:54:48.266788 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c640310-31cf-4feb-a173-5688042c6b6d" containerName="registry" Feb 17 17:54:48 crc kubenswrapper[4680]: I0217 17:54:48.266800 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c640310-31cf-4feb-a173-5688042c6b6d" containerName="registry" Feb 17 17:54:48 crc kubenswrapper[4680]: I0217 17:54:48.266894 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c640310-31cf-4feb-a173-5688042c6b6d" containerName="registry" Feb 17 17:54:48 crc kubenswrapper[4680]: I0217 17:54:48.267257 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-r5b7s" Feb 17 17:54:48 crc kubenswrapper[4680]: I0217 17:54:48.269840 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 17 17:54:48 crc kubenswrapper[4680]: I0217 17:54:48.270416 4680 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-xgkkl" Feb 17 17:54:48 crc kubenswrapper[4680]: I0217 17:54:48.270811 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 17 17:54:48 crc kubenswrapper[4680]: I0217 17:54:48.276877 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-6mmb4"] Feb 17 17:54:48 crc kubenswrapper[4680]: I0217 17:54:48.277977 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-6mmb4" Feb 17 17:54:48 crc kubenswrapper[4680]: I0217 17:54:48.282017 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-r5b7s"] Feb 17 17:54:48 crc kubenswrapper[4680]: I0217 17:54:48.285537 4680 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-s74cz" Feb 17 17:54:48 crc kubenswrapper[4680]: I0217 17:54:48.297499 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-6mmb4"] Feb 17 17:54:48 crc kubenswrapper[4680]: I0217 17:54:48.330205 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28vpt\" (UniqueName: \"kubernetes.io/projected/07caac96-9977-4dd7-9220-4ba6d1934a4c-kube-api-access-28vpt\") pod \"cert-manager-858654f9db-6mmb4\" (UID: \"07caac96-9977-4dd7-9220-4ba6d1934a4c\") " pod="cert-manager/cert-manager-858654f9db-6mmb4" Feb 17 17:54:48 crc kubenswrapper[4680]: I0217 17:54:48.330514 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jlwt\" (UniqueName: \"kubernetes.io/projected/0b2cbbee-856e-43b8-afa2-e3cbd86cf3cc-kube-api-access-4jlwt\") pod \"cert-manager-cainjector-cf98fcc89-r5b7s\" (UID: \"0b2cbbee-856e-43b8-afa2-e3cbd86cf3cc\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-r5b7s" Feb 17 17:54:48 crc kubenswrapper[4680]: I0217 17:54:48.340358 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-h4nrt"] Feb 17 17:54:48 crc kubenswrapper[4680]: I0217 17:54:48.341180 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-h4nrt" Feb 17 17:54:48 crc kubenswrapper[4680]: I0217 17:54:48.344705 4680 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-xh9vb" Feb 17 17:54:48 crc kubenswrapper[4680]: I0217 17:54:48.358983 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-h4nrt"] Feb 17 17:54:48 crc kubenswrapper[4680]: I0217 17:54:48.431434 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d69f4\" (UniqueName: \"kubernetes.io/projected/98cd155c-93dd-44c8-b05b-b3375c0eb430-kube-api-access-d69f4\") pod \"cert-manager-webhook-687f57d79b-h4nrt\" (UID: \"98cd155c-93dd-44c8-b05b-b3375c0eb430\") " pod="cert-manager/cert-manager-webhook-687f57d79b-h4nrt" Feb 17 17:54:48 crc kubenswrapper[4680]: I0217 17:54:48.431515 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jlwt\" (UniqueName: \"kubernetes.io/projected/0b2cbbee-856e-43b8-afa2-e3cbd86cf3cc-kube-api-access-4jlwt\") pod \"cert-manager-cainjector-cf98fcc89-r5b7s\" (UID: \"0b2cbbee-856e-43b8-afa2-e3cbd86cf3cc\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-r5b7s" Feb 17 17:54:48 crc kubenswrapper[4680]: I0217 17:54:48.431884 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28vpt\" (UniqueName: \"kubernetes.io/projected/07caac96-9977-4dd7-9220-4ba6d1934a4c-kube-api-access-28vpt\") pod \"cert-manager-858654f9db-6mmb4\" (UID: \"07caac96-9977-4dd7-9220-4ba6d1934a4c\") " pod="cert-manager/cert-manager-858654f9db-6mmb4" Feb 17 17:54:48 crc kubenswrapper[4680]: I0217 17:54:48.451338 4680 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jlwt\" (UniqueName: \"kubernetes.io/projected/0b2cbbee-856e-43b8-afa2-e3cbd86cf3cc-kube-api-access-4jlwt\") pod \"cert-manager-cainjector-cf98fcc89-r5b7s\" (UID: \"0b2cbbee-856e-43b8-afa2-e3cbd86cf3cc\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-r5b7s" Feb 17 17:54:48 crc kubenswrapper[4680]: I0217 17:54:48.451383 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28vpt\" (UniqueName: \"kubernetes.io/projected/07caac96-9977-4dd7-9220-4ba6d1934a4c-kube-api-access-28vpt\") pod \"cert-manager-858654f9db-6mmb4\" (UID: \"07caac96-9977-4dd7-9220-4ba6d1934a4c\") " pod="cert-manager/cert-manager-858654f9db-6mmb4" Feb 17 17:54:48 crc kubenswrapper[4680]: I0217 17:54:48.532903 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d69f4\" (UniqueName: \"kubernetes.io/projected/98cd155c-93dd-44c8-b05b-b3375c0eb430-kube-api-access-d69f4\") pod \"cert-manager-webhook-687f57d79b-h4nrt\" (UID: \"98cd155c-93dd-44c8-b05b-b3375c0eb430\") " pod="cert-manager/cert-manager-webhook-687f57d79b-h4nrt" Feb 17 17:54:48 crc kubenswrapper[4680]: I0217 17:54:48.549510 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d69f4\" (UniqueName: \"kubernetes.io/projected/98cd155c-93dd-44c8-b05b-b3375c0eb430-kube-api-access-d69f4\") pod \"cert-manager-webhook-687f57d79b-h4nrt\" (UID: \"98cd155c-93dd-44c8-b05b-b3375c0eb430\") " pod="cert-manager/cert-manager-webhook-687f57d79b-h4nrt" Feb 17 17:54:48 crc kubenswrapper[4680]: I0217 17:54:48.584696 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-r5b7s" Feb 17 17:54:48 crc kubenswrapper[4680]: I0217 17:54:48.634482 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-6mmb4" Feb 17 17:54:48 crc kubenswrapper[4680]: I0217 17:54:48.657953 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-h4nrt" Feb 17 17:54:48 crc kubenswrapper[4680]: I0217 17:54:48.917987 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-h4nrt"] Feb 17 17:54:48 crc kubenswrapper[4680]: I0217 17:54:48.926090 4680 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:54:49 crc kubenswrapper[4680]: I0217 17:54:49.062047 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-r5b7s"] Feb 17 17:54:49 crc kubenswrapper[4680]: I0217 17:54:49.092996 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-6mmb4"] Feb 17 17:54:49 crc kubenswrapper[4680]: W0217 17:54:49.095570 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07caac96_9977_4dd7_9220_4ba6d1934a4c.slice/crio-0d2bdbd28fd7f52baf513a44a68e7f2fa2ec8189f015926677019cb2e71c5a87 WatchSource:0}: Error finding container 0d2bdbd28fd7f52baf513a44a68e7f2fa2ec8189f015926677019cb2e71c5a87: Status 404 returned error can't find the container with id 0d2bdbd28fd7f52baf513a44a68e7f2fa2ec8189f015926677019cb2e71c5a87 Feb 17 17:54:49 crc kubenswrapper[4680]: I0217 17:54:49.176753 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-h4nrt" event={"ID":"98cd155c-93dd-44c8-b05b-b3375c0eb430","Type":"ContainerStarted","Data":"e7ce77a699a5ab768b12ce4f8fe998193e293ec248c7b32195851167b9fde43b"} Feb 17 17:54:49 crc kubenswrapper[4680]: I0217 17:54:49.177933 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-r5b7s" event={"ID":"0b2cbbee-856e-43b8-afa2-e3cbd86cf3cc","Type":"ContainerStarted","Data":"1507a7f3632ab856883ab45f9f1ad48a8bcc7ecdaee8fd58b1ebee03cc2abe40"} Feb 17 17:54:49 crc kubenswrapper[4680]: I0217 17:54:49.178743 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-6mmb4" event={"ID":"07caac96-9977-4dd7-9220-4ba6d1934a4c","Type":"ContainerStarted","Data":"0d2bdbd28fd7f52baf513a44a68e7f2fa2ec8189f015926677019cb2e71c5a87"} Feb 17 17:54:53 crc kubenswrapper[4680]: I0217 17:54:53.199760 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-h4nrt" event={"ID":"98cd155c-93dd-44c8-b05b-b3375c0eb430","Type":"ContainerStarted","Data":"e0bff68ce5ff68fd45f80d1c52e6cc1cee6ed084c3e01215486f757a7407491e"} Feb 17 17:54:53 crc kubenswrapper[4680]: I0217 17:54:53.200306 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-h4nrt" Feb 17 17:54:53 crc kubenswrapper[4680]: I0217 17:54:53.202834 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-r5b7s" event={"ID":"0b2cbbee-856e-43b8-afa2-e3cbd86cf3cc","Type":"ContainerStarted","Data":"71c4a4ff1b38b61c43b0d117ad26e0b3f93886d4b36eb3890b7dfb282639a87e"} Feb 17 17:54:53 crc kubenswrapper[4680]: I0217 17:54:53.204592 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-6mmb4" event={"ID":"07caac96-9977-4dd7-9220-4ba6d1934a4c","Type":"ContainerStarted","Data":"26fea9bee37f9581e0dfb845891a5273cc8043f2dad83eb044c8e2469e2f128c"} Feb 17 17:54:53 crc kubenswrapper[4680]: I0217 17:54:53.218880 4680 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-h4nrt" podStartSLOduration=1.7579247260000002 podStartE2EDuration="5.218848603s" podCreationTimestamp="2026-02-17 17:54:48 +0000 UTC" firstStartedPulling="2026-02-17 17:54:48.925842893 +0000 UTC m=+618.476741572" lastFinishedPulling="2026-02-17 17:54:52.38676677 +0000 UTC m=+621.937665449" observedRunningTime="2026-02-17 17:54:53.212522447 +0000 UTC m=+622.763421206" watchObservedRunningTime="2026-02-17 17:54:53.218848603 +0000 UTC m=+622.769747282" Feb 17 17:54:53 crc kubenswrapper[4680]: I0217 17:54:53.231537 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-r5b7s" podStartSLOduration=1.838378834 podStartE2EDuration="5.231518155s" podCreationTimestamp="2026-02-17 17:54:48 +0000 UTC" firstStartedPulling="2026-02-17 17:54:49.064259256 +0000 UTC m=+618.615157935" lastFinishedPulling="2026-02-17 17:54:52.457398577 +0000 UTC m=+622.008297256" observedRunningTime="2026-02-17 17:54:53.228796347 +0000 UTC m=+622.779695026" watchObservedRunningTime="2026-02-17 17:54:53.231518155 +0000 UTC m=+622.782416834" Feb 17 17:54:53 crc kubenswrapper[4680]: I0217 17:54:53.245750 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-6mmb4" podStartSLOduration=1.957307352 podStartE2EDuration="5.245726997s" podCreationTimestamp="2026-02-17 17:54:48 +0000 UTC" firstStartedPulling="2026-02-17 17:54:49.097437875 +0000 UTC m=+618.648336554" lastFinishedPulling="2026-02-17 17:54:52.38585752 +0000 UTC m=+621.936756199" observedRunningTime="2026-02-17 17:54:53.243196425 +0000 UTC m=+622.794095124" watchObservedRunningTime="2026-02-17 17:54:53.245726997 +0000 UTC m=+622.796625676" Feb 17 17:54:57 crc kubenswrapper[4680]: I0217 17:54:57.675127 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8vx4n"] Feb 17 17:54:57 crc kubenswrapper[4680]: I0217 17:54:57.675938 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="ovn-controller" containerID="cri-o://7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df" gracePeriod=30 Feb 17 17:54:57 crc kubenswrapper[4680]: I0217 17:54:57.675983 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="northd" containerID="cri-o://a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347" gracePeriod=30 Feb 17 17:54:57 crc kubenswrapper[4680]: I0217 17:54:57.676061 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="sbdb" containerID="cri-o://3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99" gracePeriod=30 Feb 17 17:54:57 crc kubenswrapper[4680]: I0217 17:54:57.676186 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="kube-rbac-proxy-node" containerID="cri-o://e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d" gracePeriod=30 Feb 17 17:54:57 crc kubenswrapper[4680]: I0217 17:54:57.676261 4680 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a" gracePeriod=30 Feb 17 17:54:57 crc kubenswrapper[4680]: I0217 17:54:57.676331 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="nbdb" containerID="cri-o://8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73" gracePeriod=30 Feb 17 17:54:57 crc kubenswrapper[4680]: I0217 17:54:57.677541 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="ovn-acl-logging" containerID="cri-o://0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee" gracePeriod=30 Feb 17 17:54:57 crc kubenswrapper[4680]: I0217 17:54:57.750730 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="ovnkube-controller" containerID="cri-o://140e19da03fcd14c04c3e171d2d660b792afdbba8afb628c3cf4937fed74b4fe" gracePeriod=30 Feb 17 17:54:57 crc kubenswrapper[4680]: E0217 17:54:57.773451 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2733b1c1_6f98_4184_b59b_123435eeb569.slice/crio-conmon-7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df.scope\": RecentStats: unable to find data in memory cache]" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.016434 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vx4n_2733b1c1-6f98-4184-b59b-123435eeb569/ovnkube-controller/3.log" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.018168 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vx4n_2733b1c1-6f98-4184-b59b-123435eeb569/ovn-acl-logging/0.log" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.018786 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vx4n_2733b1c1-6f98-4184-b59b-123435eeb569/ovn-controller/0.log" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.019144 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.051788 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2733b1c1-6f98-4184-b59b-123435eeb569-ovnkube-script-lib\") pod \"2733b1c1-6f98-4184-b59b-123435eeb569\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.052591 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-run-ovn-kubernetes\") pod \"2733b1c1-6f98-4184-b59b-123435eeb569\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.052698 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-run-ovn\") pod \"2733b1c1-6f98-4184-b59b-123435eeb569\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.052526 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2733b1c1-6f98-4184-b59b-123435eeb569-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "2733b1c1-6f98-4184-b59b-123435eeb569" (UID: "2733b1c1-6f98-4184-b59b-123435eeb569"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.052651 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "2733b1c1-6f98-4184-b59b-123435eeb569" (UID: "2733b1c1-6f98-4184-b59b-123435eeb569"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.052783 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-run-openvswitch\") pod \"2733b1c1-6f98-4184-b59b-123435eeb569\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.052857 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "2733b1c1-6f98-4184-b59b-123435eeb569" (UID: "2733b1c1-6f98-4184-b59b-123435eeb569"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.052813 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-cni-netd\") pod \"2733b1c1-6f98-4184-b59b-123435eeb569\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.052918 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-systemd-units\") pod \"2733b1c1-6f98-4184-b59b-123435eeb569\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.052908 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "2733b1c1-6f98-4184-b59b-123435eeb569" (UID: "2733b1c1-6f98-4184-b59b-123435eeb569"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.052973 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "2733b1c1-6f98-4184-b59b-123435eeb569" (UID: "2733b1c1-6f98-4184-b59b-123435eeb569"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.052979 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "2733b1c1-6f98-4184-b59b-123435eeb569" (UID: "2733b1c1-6f98-4184-b59b-123435eeb569"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.053000 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-cni-bin\") pod \"2733b1c1-6f98-4184-b59b-123435eeb569\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.053024 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-log-socket\") pod \"2733b1c1-6f98-4184-b59b-123435eeb569\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.053071 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "2733b1c1-6f98-4184-b59b-123435eeb569" (UID: "2733b1c1-6f98-4184-b59b-123435eeb569"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.053104 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-etc-openvswitch\") pod \"2733b1c1-6f98-4184-b59b-123435eeb569\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.053169 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-log-socket" (OuterVolumeSpecName: "log-socket") pod "2733b1c1-6f98-4184-b59b-123435eeb569" (UID: "2733b1c1-6f98-4184-b59b-123435eeb569"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.053174 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "2733b1c1-6f98-4184-b59b-123435eeb569" (UID: "2733b1c1-6f98-4184-b59b-123435eeb569"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.053202 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-var-lib-openvswitch\") pod \"2733b1c1-6f98-4184-b59b-123435eeb569\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.053227 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-var-lib-cni-networks-ovn-kubernetes\") pod \"2733b1c1-6f98-4184-b59b-123435eeb569\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.053286 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2733b1c1-6f98-4184-b59b-123435eeb569-env-overrides\") pod \"2733b1c1-6f98-4184-b59b-123435eeb569\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.053230 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "2733b1c1-6f98-4184-b59b-123435eeb569" (UID: "2733b1c1-6f98-4184-b59b-123435eeb569"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.053253 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "2733b1c1-6f98-4184-b59b-123435eeb569" (UID: "2733b1c1-6f98-4184-b59b-123435eeb569"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.053388 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2733b1c1-6f98-4184-b59b-123435eeb569-ovnkube-config\") pod \"2733b1c1-6f98-4184-b59b-123435eeb569\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.053414 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-run-systemd\") pod \"2733b1c1-6f98-4184-b59b-123435eeb569\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.053706 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2733b1c1-6f98-4184-b59b-123435eeb569-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "2733b1c1-6f98-4184-b59b-123435eeb569" (UID: "2733b1c1-6f98-4184-b59b-123435eeb569"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.053752 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-node-log\") pod \"2733b1c1-6f98-4184-b59b-123435eeb569\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.053818 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2733b1c1-6f98-4184-b59b-123435eeb569-ovn-node-metrics-cert\") pod \"2733b1c1-6f98-4184-b59b-123435eeb569\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.053819 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-node-log" (OuterVolumeSpecName: "node-log") pod "2733b1c1-6f98-4184-b59b-123435eeb569" (UID: "2733b1c1-6f98-4184-b59b-123435eeb569"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.053843 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-run-netns\") pod \"2733b1c1-6f98-4184-b59b-123435eeb569\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.053867 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plhbl\" (UniqueName: \"kubernetes.io/projected/2733b1c1-6f98-4184-b59b-123435eeb569-kube-api-access-plhbl\") pod \"2733b1c1-6f98-4184-b59b-123435eeb569\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.053903 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-kubelet\") pod \"2733b1c1-6f98-4184-b59b-123435eeb569\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.053935 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-slash\") pod \"2733b1c1-6f98-4184-b59b-123435eeb569\" (UID: \"2733b1c1-6f98-4184-b59b-123435eeb569\") " Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.053936 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2733b1c1-6f98-4184-b59b-123435eeb569-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "2733b1c1-6f98-4184-b59b-123435eeb569" (UID: "2733b1c1-6f98-4184-b59b-123435eeb569"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.054295 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "2733b1c1-6f98-4184-b59b-123435eeb569" (UID: "2733b1c1-6f98-4184-b59b-123435eeb569"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.054339 4680 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2733b1c1-6f98-4184-b59b-123435eeb569-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.054349 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "2733b1c1-6f98-4184-b59b-123435eeb569" (UID: "2733b1c1-6f98-4184-b59b-123435eeb569"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.054357 4680 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.054371 4680 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.054386 4680 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.054399 4680 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.054409 4680 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.054421 4680 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.054431 4680 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-log-socket\") on node \"crc\" DevicePath \"\"" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.054441 4680 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.054467 4680 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.054479 4680 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.054492 4680 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2733b1c1-6f98-4184-b59b-123435eeb569-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.054502 4680 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2733b1c1-6f98-4184-b59b-123435eeb569-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.054375 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-slash" 
(OuterVolumeSpecName: "host-slash") pod "2733b1c1-6f98-4184-b59b-123435eeb569" (UID: "2733b1c1-6f98-4184-b59b-123435eeb569"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.054512 4680 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-node-log\") on node \"crc\" DevicePath \"\"" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.059093 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2733b1c1-6f98-4184-b59b-123435eeb569-kube-api-access-plhbl" (OuterVolumeSpecName: "kube-api-access-plhbl") pod "2733b1c1-6f98-4184-b59b-123435eeb569" (UID: "2733b1c1-6f98-4184-b59b-123435eeb569"). InnerVolumeSpecName "kube-api-access-plhbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.060057 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2733b1c1-6f98-4184-b59b-123435eeb569-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "2733b1c1-6f98-4184-b59b-123435eeb569" (UID: "2733b1c1-6f98-4184-b59b-123435eeb569"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.072391 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "2733b1c1-6f98-4184-b59b-123435eeb569" (UID: "2733b1c1-6f98-4184-b59b-123435eeb569"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.072482 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xqcql"] Feb 17 17:54:58 crc kubenswrapper[4680]: E0217 17:54:58.072715 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="ovn-controller" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.072736 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="ovn-controller" Feb 17 17:54:58 crc kubenswrapper[4680]: E0217 17:54:58.072749 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="ovnkube-controller" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.072757 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="ovnkube-controller" Feb 17 17:54:58 crc kubenswrapper[4680]: E0217 17:54:58.072766 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="ovnkube-controller" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.072774 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="ovnkube-controller" Feb 17 17:54:58 crc kubenswrapper[4680]: E0217 17:54:58.072785 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="northd" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.072792 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" 
containerName="northd" Feb 17 17:54:58 crc kubenswrapper[4680]: E0217 17:54:58.072806 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="ovn-acl-logging" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.072814 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="ovn-acl-logging" Feb 17 17:54:58 crc kubenswrapper[4680]: E0217 17:54:58.072826 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="nbdb" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.072834 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="nbdb" Feb 17 17:54:58 crc kubenswrapper[4680]: E0217 17:54:58.072860 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="ovnkube-controller" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.072868 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="ovnkube-controller" Feb 17 17:54:58 crc kubenswrapper[4680]: E0217 17:54:58.072879 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="ovnkube-controller" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.072887 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="ovnkube-controller" Feb 17 17:54:58 crc kubenswrapper[4680]: E0217 17:54:58.072899 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="sbdb" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.072907 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="sbdb" Feb 17 17:54:58 crc kubenswrapper[4680]: E0217 17:54:58.072915 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="kube-rbac-proxy-node" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.072922 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="kube-rbac-proxy-node" Feb 17 17:54:58 crc kubenswrapper[4680]: E0217 17:54:58.072933 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="kubecfg-setup" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.072940 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="kubecfg-setup" Feb 17 17:54:58 crc kubenswrapper[4680]: E0217 17:54:58.072951 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.072959 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.073077 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="ovn-acl-logging" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.073088 4680 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="northd" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.073101 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="ovnkube-controller" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.073112 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="ovn-controller" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.073120 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="kube-rbac-proxy-node" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.073130 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="nbdb" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.073141 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="sbdb" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.073152 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="ovnkube-controller" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.073162 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="ovnkube-controller" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.073172 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="ovnkube-controller" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.073181 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 17:54:58 crc kubenswrapper[4680]: E0217 17:54:58.073299 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="ovnkube-controller" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.073309 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="ovnkube-controller" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.073444 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" containerName="ovnkube-controller" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.075215 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.154950 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-ovn-node-metrics-cert\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.155000 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-host-run-ovn-kubernetes\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.155036 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-host-kubelet\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.155058 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-etc-openvswitch\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.155081 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxmsn\" (UniqueName: \"kubernetes.io/projected/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-kube-api-access-vxmsn\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.155104 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-host-cni-netd\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.155141 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-host-cni-bin\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.155162 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-host-run-netns\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.155185 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-env-overrides\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.155213 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.155239 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-log-socket\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.155301 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-ovnkube-script-lib\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.155341 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-systemd-units\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.155367 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-node-log\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.155392 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-var-lib-openvswitch\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.155414 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-run-ovn\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.155438 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-run-systemd\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.155464 4680 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-run-openvswitch\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.155495 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-ovnkube-config\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.155519 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-host-slash\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.155853 4680 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.155872 4680 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-slash\") on node \"crc\" DevicePath \"\"" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.155887 4680 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.155900 4680 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2733b1c1-6f98-4184-b59b-123435eeb569-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.155913 4680 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2733b1c1-6f98-4184-b59b-123435eeb569-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.155926 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plhbl\" (UniqueName: \"kubernetes.io/projected/2733b1c1-6f98-4184-b59b-123435eeb569-kube-api-access-plhbl\") on node \"crc\" DevicePath \"\"" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.228707 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t2c5v_c056df58-0160-4cf6-ae0c-dc7095c8be95/kube-multus/1.log" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.229304 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t2c5v_c056df58-0160-4cf6-ae0c-dc7095c8be95/kube-multus/0.log" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.229380 4680 generic.go:334] "Generic (PLEG): container finished" podID="c056df58-0160-4cf6-ae0c-dc7095c8be95" containerID="ff98cc275fec6841e32dc0655c3f31f7544792122c4a96cf4db0e51d52a4065d" exitCode=2 Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.229458 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-t2c5v" event={"ID":"c056df58-0160-4cf6-ae0c-dc7095c8be95","Type":"ContainerDied","Data":"ff98cc275fec6841e32dc0655c3f31f7544792122c4a96cf4db0e51d52a4065d"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.229502 4680 scope.go:117] "RemoveContainer" containerID="7101a12e6edc51f950ea54caf867fc04c59470c07742e6b21c0a988ce837f50c" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.229949 4680 scope.go:117] "RemoveContainer" containerID="ff98cc275fec6841e32dc0655c3f31f7544792122c4a96cf4db0e51d52a4065d" Feb 17 17:54:58 crc kubenswrapper[4680]: E0217 17:54:58.230221 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-t2c5v_openshift-multus(c056df58-0160-4cf6-ae0c-dc7095c8be95)\"" pod="openshift-multus/multus-t2c5v" podUID="c056df58-0160-4cf6-ae0c-dc7095c8be95" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.233444 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vx4n_2733b1c1-6f98-4184-b59b-123435eeb569/ovnkube-controller/3.log" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.235933 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vx4n_2733b1c1-6f98-4184-b59b-123435eeb569/ovn-acl-logging/0.log" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.236533 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vx4n_2733b1c1-6f98-4184-b59b-123435eeb569/ovn-controller/0.log" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.236894 4680 generic.go:334] "Generic (PLEG): container finished" podID="2733b1c1-6f98-4184-b59b-123435eeb569" containerID="140e19da03fcd14c04c3e171d2d660b792afdbba8afb628c3cf4937fed74b4fe" exitCode=0 Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.236923 4680 generic.go:334] "Generic (PLEG): container finished" podID="2733b1c1-6f98-4184-b59b-123435eeb569" containerID="3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99" exitCode=0 Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.236932 4680 generic.go:334] "Generic (PLEG): container finished" podID="2733b1c1-6f98-4184-b59b-123435eeb569" containerID="8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73" exitCode=0 Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.236940 4680 generic.go:334] "Generic (PLEG): container finished" podID="2733b1c1-6f98-4184-b59b-123435eeb569" containerID="a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347" exitCode=0 Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.236936 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" event={"ID":"2733b1c1-6f98-4184-b59b-123435eeb569","Type":"ContainerDied","Data":"140e19da03fcd14c04c3e171d2d660b792afdbba8afb628c3cf4937fed74b4fe"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.236975 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" event={"ID":"2733b1c1-6f98-4184-b59b-123435eeb569","Type":"ContainerDied","Data":"3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.236990 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" 
event={"ID":"2733b1c1-6f98-4184-b59b-123435eeb569","Type":"ContainerDied","Data":"8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237001 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" event={"ID":"2733b1c1-6f98-4184-b59b-123435eeb569","Type":"ContainerDied","Data":"a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237011 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" event={"ID":"2733b1c1-6f98-4184-b59b-123435eeb569","Type":"ContainerDied","Data":"f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.236948 4680 generic.go:334] "Generic (PLEG): container finished" podID="2733b1c1-6f98-4184-b59b-123435eeb569" containerID="f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a" exitCode=0 Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237029 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237032 4680 generic.go:334] "Generic (PLEG): container finished" podID="2733b1c1-6f98-4184-b59b-123435eeb569" containerID="e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d" exitCode=0 Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237545 4680 generic.go:334] "Generic (PLEG): container finished" podID="2733b1c1-6f98-4184-b59b-123435eeb569" containerID="0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee" exitCode=143 Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237558 4680 generic.go:334] "Generic (PLEG): container finished" podID="2733b1c1-6f98-4184-b59b-123435eeb569" containerID="7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df" exitCode=143 Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237041 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" event={"ID":"2733b1c1-6f98-4184-b59b-123435eeb569","Type":"ContainerDied","Data":"e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237595 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"140e19da03fcd14c04c3e171d2d660b792afdbba8afb628c3cf4937fed74b4fe"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237609 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237620 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237627 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237635 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237643 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237649 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237663 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237671 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237677 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237695 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" event={"ID":"2733b1c1-6f98-4184-b59b-123435eeb569","Type":"ContainerDied","Data":"0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237711 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"140e19da03fcd14c04c3e171d2d660b792afdbba8afb628c3cf4937fed74b4fe"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237720 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237727 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237733 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237740 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237747 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237754 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237761 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237767 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237789 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237800 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" event={"ID":"2733b1c1-6f98-4184-b59b-123435eeb569","Type":"ContainerDied","Data":"7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237811 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"140e19da03fcd14c04c3e171d2d660b792afdbba8afb628c3cf4937fed74b4fe"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237819 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237825 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237831 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237838 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237844 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237851 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237857 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237864 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237871 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237882 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-8vx4n" event={"ID":"2733b1c1-6f98-4184-b59b-123435eeb569","Type":"ContainerDied","Data":"1c1a2c5afbc9ce2fa8901afc214aeec3824ac8359905609213a86b17ab50f391"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237893 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"140e19da03fcd14c04c3e171d2d660b792afdbba8afb628c3cf4937fed74b4fe"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237902 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237911 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237918 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237925 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237933 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237940 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237947 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237955 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.237962 4680 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024"} Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.257421 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-host-cni-bin\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.257461 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-host-run-netns\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.257476 4680 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-env-overrides\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.257500 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.257516 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-log-socket\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.257547 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-ovnkube-script-lib\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.257563 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-node-log\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.257585 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-systemd-units\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.257602 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-var-lib-openvswitch\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.257620 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-run-ovn\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.257636 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-run-systemd\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.257663 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-run-openvswitch\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.257683 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-ovnkube-config\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.257698 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-host-slash\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.257712 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-ovn-node-metrics-cert\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.257727 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-host-run-ovn-kubernetes\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.257747 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-etc-openvswitch\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.257761 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-host-kubelet\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.257778 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxmsn\" (UniqueName: \"kubernetes.io/projected/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-kube-api-access-vxmsn\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.257794 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-host-cni-netd\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.257860 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-host-cni-netd\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.257891 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-host-cni-bin\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.257911 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-host-run-netns\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.258474 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-env-overrides\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.258522 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.258561 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-log-socket\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.259513 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-run-openvswitch\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.259605 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-node-log\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.259667 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-ovnkube-script-lib\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.259925 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-systemd-units\") pod \"ovnkube-node-xqcql\" (UID: 
\"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.259966 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-var-lib-openvswitch\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.259999 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-run-ovn\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.260031 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-run-systemd\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.260163 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-ovnkube-config\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.260204 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-host-slash\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.260243 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-etc-openvswitch\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.260280 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-host-run-ovn-kubernetes\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.260330 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-host-kubelet\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.264727 4680 scope.go:117] "RemoveContainer" containerID="140e19da03fcd14c04c3e171d2d660b792afdbba8afb628c3cf4937fed74b4fe" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.265551 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-ovn-node-metrics-cert\") pod 
\"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.277752 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8vx4n"] Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.281242 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8vx4n"] Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.282069 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxmsn\" (UniqueName: \"kubernetes.io/projected/0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8-kube-api-access-vxmsn\") pod \"ovnkube-node-xqcql\" (UID: \"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.289501 4680 scope.go:117] "RemoveContainer" containerID="c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.305469 4680 scope.go:117] "RemoveContainer" containerID="3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.319023 4680 scope.go:117] "RemoveContainer" containerID="8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.332136 4680 scope.go:117] "RemoveContainer" containerID="a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.345264 4680 scope.go:117] "RemoveContainer" containerID="f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.358331 4680 scope.go:117] "RemoveContainer" containerID="e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.372397 4680 scope.go:117] "RemoveContainer" containerID="0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.384812 4680 scope.go:117] "RemoveContainer" containerID="7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.391564 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.395512 4680 scope.go:117] "RemoveContainer" containerID="731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.409737 4680 scope.go:117] "RemoveContainer" containerID="140e19da03fcd14c04c3e171d2d660b792afdbba8afb628c3cf4937fed74b4fe" Feb 17 17:54:58 crc kubenswrapper[4680]: E0217 17:54:58.410296 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"140e19da03fcd14c04c3e171d2d660b792afdbba8afb628c3cf4937fed74b4fe\": container with ID starting with 140e19da03fcd14c04c3e171d2d660b792afdbba8afb628c3cf4937fed74b4fe not found: ID does not exist" containerID="140e19da03fcd14c04c3e171d2d660b792afdbba8afb628c3cf4937fed74b4fe" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.410351 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"140e19da03fcd14c04c3e171d2d660b792afdbba8afb628c3cf4937fed74b4fe"} err="failed to get container status \"140e19da03fcd14c04c3e171d2d660b792afdbba8afb628c3cf4937fed74b4fe\": rpc error: code = NotFound desc = could not find container \"140e19da03fcd14c04c3e171d2d660b792afdbba8afb628c3cf4937fed74b4fe\": container with ID starting with 140e19da03fcd14c04c3e171d2d660b792afdbba8afb628c3cf4937fed74b4fe not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.410459 4680 scope.go:117] "RemoveContainer" containerID="c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af" Feb 17 17:54:58 crc kubenswrapper[4680]: E0217 17:54:58.410765 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af\": container with ID starting with c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af not found: ID does not exist" containerID="c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.410819 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af"} err="failed to get container status \"c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af\": rpc error: code = NotFound desc = could not find container \"c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af\": container with ID starting with c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.410852 4680 scope.go:117] "RemoveContainer" containerID="3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99" Feb 17 17:54:58 crc kubenswrapper[4680]: E0217 17:54:58.411193 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\": container with ID starting with 3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99 not found: ID does not exist" containerID="3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.411222 4680 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99"} err="failed to get container status \"3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\": rpc error: code = NotFound desc = could not find container \"3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\": container with ID starting with 3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99 not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.411238 4680 scope.go:117] "RemoveContainer" containerID="8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73" Feb 17 17:54:58 crc kubenswrapper[4680]: E0217 17:54:58.411641 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\": container with ID starting with 8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73 not found: ID does not exist" containerID="8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.411681 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73"} err="failed to get container status \"8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\": rpc error: code = NotFound desc = could not find container \"8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\": container with ID starting with 8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73 not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.411696 4680 scope.go:117] "RemoveContainer" containerID="a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347" Feb 17 17:54:58 crc kubenswrapper[4680]: E0217 17:54:58.411961 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\": container with ID starting with a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347 not found: ID does not exist" containerID="a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.411994 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347"} err="failed to get container status \"a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\": rpc error: code = NotFound desc = could not find container \"a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\": container with ID starting with a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347 not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.412014 4680 scope.go:117] "RemoveContainer" containerID="f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a" Feb 17 17:54:58 crc kubenswrapper[4680]: E0217 17:54:58.412254 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\": container with ID starting with f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a not found: ID does not exist" 
containerID="f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.412356 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a"} err="failed to get container status \"f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\": rpc error: code = NotFound desc = could not find container \"f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\": container with ID starting with f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.412373 4680 scope.go:117] "RemoveContainer" containerID="e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d" Feb 17 17:54:58 crc kubenswrapper[4680]: E0217 17:54:58.412916 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\": container with ID starting with e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d not found: ID does not exist" containerID="e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.412949 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d"} err="failed to get container status \"e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\": rpc error: code = NotFound desc = could not find container \"e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\": container with ID starting with e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.412967 4680 scope.go:117] "RemoveContainer" containerID="0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee" Feb 17 17:54:58 crc kubenswrapper[4680]: E0217 17:54:58.413184 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\": container with ID starting with 0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee not found: ID does not exist" containerID="0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.413213 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee"} err="failed to get container status \"0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\": rpc error: code = NotFound desc = could not find container \"0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\": container with ID starting with 0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.413230 4680 scope.go:117] "RemoveContainer" containerID="7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df" Feb 17 17:54:58 crc kubenswrapper[4680]: E0217 17:54:58.413486 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df\": container with ID starting with 7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df not found: ID does not exist" containerID="7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.413517 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df"} err="failed to get container status \"7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df\": rpc error: code = NotFound desc = could not find container \"7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df\": container with ID starting with 7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.413535 4680 scope.go:117] "RemoveContainer" containerID="731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024" Feb 17 17:54:58 crc kubenswrapper[4680]: E0217 17:54:58.413775 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\": container with ID starting with 731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024 not found: ID does not exist" containerID="731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.413796 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024"} err="failed to get container status \"731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\": rpc error: code = NotFound desc = could not find container \"731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\": container with ID starting with 731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024 not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.413809 4680 scope.go:117] "RemoveContainer" containerID="140e19da03fcd14c04c3e171d2d660b792afdbba8afb628c3cf4937fed74b4fe" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.414027 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"140e19da03fcd14c04c3e171d2d660b792afdbba8afb628c3cf4937fed74b4fe"} err="failed to get container status \"140e19da03fcd14c04c3e171d2d660b792afdbba8afb628c3cf4937fed74b4fe\": rpc error: code = NotFound desc = could not find container \"140e19da03fcd14c04c3e171d2d660b792afdbba8afb628c3cf4937fed74b4fe\": container with ID starting with 140e19da03fcd14c04c3e171d2d660b792afdbba8afb628c3cf4937fed74b4fe not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.414048 4680 scope.go:117] "RemoveContainer" containerID="c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.414265 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af"} err="failed to get container status \"c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af\": rpc error: code = NotFound desc = could not find container \"c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af\": container with ID starting with 
c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.414281 4680 scope.go:117] "RemoveContainer" containerID="3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.414527 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99"} err="failed to get container status \"3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\": rpc error: code = NotFound desc = could not find container \"3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\": container with ID starting with 3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99 not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.414552 4680 scope.go:117] "RemoveContainer" containerID="8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.414811 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73"} err="failed to get container status \"8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\": rpc error: code = NotFound desc = could not find container \"8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\": container with ID starting with 8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73 not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.414838 4680 scope.go:117] "RemoveContainer" containerID="a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.415051 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347"} err="failed to get container status \"a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\": rpc error: code = NotFound desc = could not find container \"a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\": container with ID starting with a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347 not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.415087 4680 scope.go:117] "RemoveContainer" containerID="f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.415288 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a"} err="failed to get container status \"f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\": rpc error: code = NotFound desc = could not find container \"f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\": container with ID starting with f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.415405 4680 scope.go:117] "RemoveContainer" containerID="e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.415637 4680 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d"} err="failed to get container status \"e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\": rpc error: code = NotFound desc = could not find container \"e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\": container with ID starting with e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.415665 4680 scope.go:117] "RemoveContainer" containerID="0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.415909 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee"} err="failed to get container status \"0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\": rpc error: code = NotFound desc = could not find container \"0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\": container with ID starting with 0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.415945 4680 scope.go:117] "RemoveContainer" containerID="7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.416132 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df"} err="failed to get container status \"7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df\": rpc error: code = NotFound desc = could not find container \"7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df\": container with ID starting with 7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.416153 4680 scope.go:117] "RemoveContainer" containerID="731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.416371 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024"} err="failed to get container status \"731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\": rpc error: code = NotFound desc = could not find container \"731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\": container with ID starting with 731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024 not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.416394 4680 scope.go:117] "RemoveContainer" containerID="140e19da03fcd14c04c3e171d2d660b792afdbba8afb628c3cf4937fed74b4fe" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.416585 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"140e19da03fcd14c04c3e171d2d660b792afdbba8afb628c3cf4937fed74b4fe"} err="failed to get container status \"140e19da03fcd14c04c3e171d2d660b792afdbba8afb628c3cf4937fed74b4fe\": rpc error: code = NotFound desc = could not find container \"140e19da03fcd14c04c3e171d2d660b792afdbba8afb628c3cf4937fed74b4fe\": container with ID starting with 140e19da03fcd14c04c3e171d2d660b792afdbba8afb628c3cf4937fed74b4fe not found: ID does not exist" Feb 
17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.416609 4680 scope.go:117] "RemoveContainer" containerID="c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.416826 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af"} err="failed to get container status \"c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af\": rpc error: code = NotFound desc = could not find container \"c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af\": container with ID starting with c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.416847 4680 scope.go:117] "RemoveContainer" containerID="3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.417043 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99"} err="failed to get container status \"3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\": rpc error: code = NotFound desc = could not find container \"3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\": container with ID starting with 3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99 not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.417065 4680 scope.go:117] "RemoveContainer" containerID="8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.417270 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73"} err="failed to get container status \"8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\": rpc error: code = NotFound desc = could not find container \"8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\": container with ID starting with 8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73 not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.417293 4680 scope.go:117] "RemoveContainer" containerID="a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.417476 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347"} err="failed to get container status \"a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\": rpc error: code = NotFound desc = could not find container \"a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\": container with ID starting with a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347 not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.417497 4680 scope.go:117] "RemoveContainer" containerID="f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.417787 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a"} err="failed to get container status 
\"f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\": rpc error: code = NotFound desc = could not find container \"f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\": container with ID starting with f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.417822 4680 scope.go:117] "RemoveContainer" containerID="e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.418119 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d"} err="failed to get container status \"e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\": rpc error: code = NotFound desc = could not find container \"e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\": container with ID starting with e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.418146 4680 scope.go:117] "RemoveContainer" containerID="0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.418389 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee"} err="failed to get container status \"0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\": rpc error: code = NotFound desc = could not find container \"0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\": container with ID starting with 0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.418416 4680 scope.go:117] "RemoveContainer" containerID="7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.418705 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df"} err="failed to get container status \"7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df\": rpc error: code = NotFound desc = could not find container \"7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df\": container with ID starting with 7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.418731 4680 scope.go:117] "RemoveContainer" containerID="731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.419025 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024"} err="failed to get container status \"731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\": rpc error: code = NotFound desc = could not find container \"731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\": container with ID starting with 731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024 not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.419048 4680 scope.go:117] "RemoveContainer" 
containerID="140e19da03fcd14c04c3e171d2d660b792afdbba8afb628c3cf4937fed74b4fe" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.419245 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"140e19da03fcd14c04c3e171d2d660b792afdbba8afb628c3cf4937fed74b4fe"} err="failed to get container status \"140e19da03fcd14c04c3e171d2d660b792afdbba8afb628c3cf4937fed74b4fe\": rpc error: code = NotFound desc = could not find container \"140e19da03fcd14c04c3e171d2d660b792afdbba8afb628c3cf4937fed74b4fe\": container with ID starting with 140e19da03fcd14c04c3e171d2d660b792afdbba8afb628c3cf4937fed74b4fe not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.419267 4680 scope.go:117] "RemoveContainer" containerID="c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.419526 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af"} err="failed to get container status \"c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af\": rpc error: code = NotFound desc = could not find container \"c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af\": container with ID starting with c674c284bbd55b50db2119801a9c9903c631a6a99a7275d2d07496dda24a54af not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.419553 4680 scope.go:117] "RemoveContainer" containerID="3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.419955 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99"} err="failed to get container status \"3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\": rpc error: code = NotFound desc = could not find container \"3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99\": container with ID starting with 3ee85c82da9d0a23d220095f4d3b3cb7e5c5991f8cd9a3319eb2ac5b3f075d99 not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.419982 4680 scope.go:117] "RemoveContainer" containerID="8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.420228 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73"} err="failed to get container status \"8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\": rpc error: code = NotFound desc = could not find container \"8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73\": container with ID starting with 8dd586d12eef4cfa387716f97f6db4338587898c59754ebbe46c4de3cfcb2e73 not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.420254 4680 scope.go:117] "RemoveContainer" containerID="a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.420538 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347"} err="failed to get container status \"a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\": rpc error: code = NotFound desc = could not find 
container \"a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347\": container with ID starting with a60ea95d49b4ce4a7167348936efd51a52a5c65cbca381632fb0baaaa8518347 not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.420587 4680 scope.go:117] "RemoveContainer" containerID="f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a" Feb 17 17:54:58 crc kubenswrapper[4680]: W0217 17:54:58.420661 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fbf4a86_6049_4a56_bd46_4e8e9a7a46f8.slice/crio-f26ee1da230bd4fdc22189230a8ee20ec9b18f62d6c8d7337754d04676227dbc WatchSource:0}: Error finding container f26ee1da230bd4fdc22189230a8ee20ec9b18f62d6c8d7337754d04676227dbc: Status 404 returned error can't find the container with id f26ee1da230bd4fdc22189230a8ee20ec9b18f62d6c8d7337754d04676227dbc Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.420854 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a"} err="failed to get container status \"f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\": rpc error: code = NotFound desc = could not find container \"f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a\": container with ID starting with f2fae58e0e73f07f2c9119259a351101505983b2a18304eb340778e12cbcc44a not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.420884 4680 scope.go:117] "RemoveContainer" containerID="e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.421067 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d"} err="failed to get container status \"e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\": rpc error: code = NotFound desc = could not find container \"e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d\": container with ID starting with e73213d4df93788a730d865c35df86b9e717d2df1041f00d757de5c6cc5a5f6d not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.421089 4680 scope.go:117] "RemoveContainer" containerID="0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.421261 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee"} err="failed to get container status \"0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\": rpc error: code = NotFound desc = could not find container \"0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee\": container with ID starting with 0be9c61b0c008ba07eed20e2f0c4f9bf043b546b1af3b3f17afb703f3b9cd7ee not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.421284 4680 scope.go:117] "RemoveContainer" containerID="7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.421541 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df"} err="failed to get container status 
\"7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df\": rpc error: code = NotFound desc = could not find container \"7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df\": container with ID starting with 7c00e0872439df33e22760d2cd0a938225841734172bd4f2b5658c2d99f102df not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.421562 4680 scope.go:117] "RemoveContainer" containerID="731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.421738 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024"} err="failed to get container status \"731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\": rpc error: code = NotFound desc = could not find container \"731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024\": container with ID starting with 731a6ba7cca8a43436a2112aa5bb4c7be918f974bd867e46e7aa0f6e9a03d024 not found: ID does not exist" Feb 17 17:54:58 crc kubenswrapper[4680]: I0217 17:54:58.668076 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-h4nrt" Feb 17 17:54:59 crc kubenswrapper[4680]: I0217 17:54:59.112356 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2733b1c1-6f98-4184-b59b-123435eeb569" path="/var/lib/kubelet/pods/2733b1c1-6f98-4184-b59b-123435eeb569/volumes" Feb 17 17:54:59 crc kubenswrapper[4680]: I0217 17:54:59.244229 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t2c5v_c056df58-0160-4cf6-ae0c-dc7095c8be95/kube-multus/1.log" Feb 17 17:54:59 crc kubenswrapper[4680]: I0217 17:54:59.245701 4680 generic.go:334] "Generic (PLEG): container finished" podID="0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8" containerID="68eda7f16d178fee0105200c32885e793379852c89110f13769e6d1d8d49f2f0" exitCode=0 Feb 17 17:54:59 crc kubenswrapper[4680]: I0217 17:54:59.245739 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" event={"ID":"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8","Type":"ContainerDied","Data":"68eda7f16d178fee0105200c32885e793379852c89110f13769e6d1d8d49f2f0"} Feb 17 17:54:59 crc kubenswrapper[4680]: I0217 17:54:59.245766 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" event={"ID":"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8","Type":"ContainerStarted","Data":"f26ee1da230bd4fdc22189230a8ee20ec9b18f62d6c8d7337754d04676227dbc"} Feb 17 17:55:00 crc kubenswrapper[4680]: I0217 17:55:00.254956 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" event={"ID":"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8","Type":"ContainerStarted","Data":"8da1c00abdea8de786c6b6f90def2455727790b9a0bd11ea215f37afc65fedaf"} Feb 17 17:55:00 crc kubenswrapper[4680]: I0217 17:55:00.255289 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" event={"ID":"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8","Type":"ContainerStarted","Data":"cd2f65397279b8f1bcffc9b912b83ec10e61fe7f375a98f57077e05b4c3aa8d1"} Feb 17 17:55:00 crc kubenswrapper[4680]: I0217 17:55:00.255306 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" 
event={"ID":"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8","Type":"ContainerStarted","Data":"be3b60d9e6fd963a216899cd415122da6ea81d27c9b429b1e6c66c6573d35118"} Feb 17 17:55:00 crc kubenswrapper[4680]: I0217 17:55:00.255338 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" event={"ID":"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8","Type":"ContainerStarted","Data":"a790d85d18042c6741ce5037aa9cd6c519d35168dd5ab0ed9986319a754242f7"} Feb 17 17:55:00 crc kubenswrapper[4680]: I0217 17:55:00.255353 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" event={"ID":"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8","Type":"ContainerStarted","Data":"6363eb8a80e60c2d70184e787430040e5a4427063172f6e164f0cff477935a9f"} Feb 17 17:55:00 crc kubenswrapper[4680]: I0217 17:55:00.255365 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" event={"ID":"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8","Type":"ContainerStarted","Data":"07a6a1498e3e9bd6a33229ff93d1b984f85883572d069b86b093145a5242ec30"} Feb 17 17:55:02 crc kubenswrapper[4680]: I0217 17:55:02.267249 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" event={"ID":"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8","Type":"ContainerStarted","Data":"3cb64e8fa91e36b56c51fc8d2301106b5b365ae44d1de57174ec4db824958a4c"} Feb 17 17:55:04 crc kubenswrapper[4680]: I0217 17:55:04.286442 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" event={"ID":"0fbf4a86-6049-4a56-bd46-4e8e9a7a46f8","Type":"ContainerStarted","Data":"b7f06f2e5d3f93bd7933d177e054119e03c301ad4220ccab96e6cb67c40e5680"} Feb 17 17:55:04 crc kubenswrapper[4680]: I0217 17:55:04.286741 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:55:04 crc kubenswrapper[4680]: I0217 17:55:04.286752 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:55:04 crc kubenswrapper[4680]: I0217 17:55:04.286761 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:55:04 crc kubenswrapper[4680]: I0217 17:55:04.312125 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:55:04 crc kubenswrapper[4680]: I0217 17:55:04.317161 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:55:04 crc kubenswrapper[4680]: I0217 17:55:04.325874 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" podStartSLOduration=6.325858227 podStartE2EDuration="6.325858227s" podCreationTimestamp="2026-02-17 17:54:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:55:04.325753884 +0000 UTC m=+633.876652583" watchObservedRunningTime="2026-02-17 17:55:04.325858227 +0000 UTC m=+633.876756906" Feb 17 17:55:11 crc kubenswrapper[4680]: I0217 17:55:11.108519 4680 scope.go:117] "RemoveContainer" containerID="ff98cc275fec6841e32dc0655c3f31f7544792122c4a96cf4db0e51d52a4065d" Feb 17 17:55:11 crc kubenswrapper[4680]: I0217 17:55:11.325345 4680 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-t2c5v_c056df58-0160-4cf6-ae0c-dc7095c8be95/kube-multus/1.log" Feb 17 17:55:11 crc kubenswrapper[4680]: I0217 17:55:11.325414 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t2c5v" event={"ID":"c056df58-0160-4cf6-ae0c-dc7095c8be95","Type":"ContainerStarted","Data":"ffa27d522aff3798ec93a2aa6984d160ab503e0cd654cc2f22d3636fe907bcef"} Feb 17 17:55:28 crc kubenswrapper[4680]: I0217 17:55:28.412863 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xqcql" Feb 17 17:55:39 crc kubenswrapper[4680]: I0217 17:55:39.235513 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j"] Feb 17 17:55:39 crc kubenswrapper[4680]: I0217 17:55:39.237457 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j" Feb 17 17:55:39 crc kubenswrapper[4680]: I0217 17:55:39.239499 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 17:55:39 crc kubenswrapper[4680]: I0217 17:55:39.264591 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j"] Feb 17 17:55:39 crc kubenswrapper[4680]: I0217 17:55:39.367762 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41e38977-61de-46c0-8a84-9cf4de5d5156-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j\" (UID: \"41e38977-61de-46c0-8a84-9cf4de5d5156\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j" Feb 17 17:55:39 crc kubenswrapper[4680]: I0217 17:55:39.367825 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41e38977-61de-46c0-8a84-9cf4de5d5156-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j\" (UID: \"41e38977-61de-46c0-8a84-9cf4de5d5156\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j" Feb 17 17:55:39 crc kubenswrapper[4680]: I0217 17:55:39.367847 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgjfj\" (UniqueName: \"kubernetes.io/projected/41e38977-61de-46c0-8a84-9cf4de5d5156-kube-api-access-qgjfj\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j\" (UID: \"41e38977-61de-46c0-8a84-9cf4de5d5156\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j" Feb 17 17:55:39 crc kubenswrapper[4680]: I0217 17:55:39.468645 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41e38977-61de-46c0-8a84-9cf4de5d5156-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j\" (UID: \"41e38977-61de-46c0-8a84-9cf4de5d5156\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j" Feb 17 17:55:39 crc kubenswrapper[4680]: I0217 17:55:39.469018 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/41e38977-61de-46c0-8a84-9cf4de5d5156-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j\" (UID: \"41e38977-61de-46c0-8a84-9cf4de5d5156\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j" Feb 17 17:55:39 crc kubenswrapper[4680]: I0217 17:55:39.469114 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgjfj\" (UniqueName: \"kubernetes.io/projected/41e38977-61de-46c0-8a84-9cf4de5d5156-kube-api-access-qgjfj\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j\" (UID: \"41e38977-61de-46c0-8a84-9cf4de5d5156\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j" Feb 17 17:55:39 crc kubenswrapper[4680]: I0217 17:55:39.469193 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41e38977-61de-46c0-8a84-9cf4de5d5156-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j\" (UID: \"41e38977-61de-46c0-8a84-9cf4de5d5156\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j" Feb 17 17:55:39 crc kubenswrapper[4680]: I0217 17:55:39.469458 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41e38977-61de-46c0-8a84-9cf4de5d5156-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j\" (UID: \"41e38977-61de-46c0-8a84-9cf4de5d5156\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j" Feb 17 17:55:39 crc kubenswrapper[4680]: I0217 17:55:39.487229 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgjfj\" (UniqueName: \"kubernetes.io/projected/41e38977-61de-46c0-8a84-9cf4de5d5156-kube-api-access-qgjfj\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j\" (UID: \"41e38977-61de-46c0-8a84-9cf4de5d5156\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j" Feb 17 17:55:39 crc kubenswrapper[4680]: I0217 17:55:39.566966 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j" Feb 17 17:55:39 crc kubenswrapper[4680]: I0217 17:55:39.734406 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j"] Feb 17 17:55:40 crc kubenswrapper[4680]: I0217 17:55:40.560372 4680 generic.go:334] "Generic (PLEG): container finished" podID="41e38977-61de-46c0-8a84-9cf4de5d5156" containerID="6c859c98910101b1099ecac0fdb3eed7c324da3c7b9d233d90fa32f866c879e3" exitCode=0 Feb 17 17:55:40 crc kubenswrapper[4680]: I0217 17:55:40.560669 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j" event={"ID":"41e38977-61de-46c0-8a84-9cf4de5d5156","Type":"ContainerDied","Data":"6c859c98910101b1099ecac0fdb3eed7c324da3c7b9d233d90fa32f866c879e3"} Feb 17 17:55:40 crc kubenswrapper[4680]: I0217 17:55:40.560699 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j" event={"ID":"41e38977-61de-46c0-8a84-9cf4de5d5156","Type":"ContainerStarted","Data":"fc643eab605dcf18b98a379eeb2129293b88b6b60066c2e8e6c72bbd646c89b4"} Feb 17 17:55:41 crc kubenswrapper[4680]: I0217 17:55:41.567747 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j" event={"ID":"41e38977-61de-46c0-8a84-9cf4de5d5156","Type":"ContainerStarted","Data":"4665d0c3437bd95027c8e583cf9081713b404aef35d8dabb7396bd945228d56d"} Feb 17 17:55:42 crc kubenswrapper[4680]: I0217 17:55:42.574459 4680 generic.go:334] "Generic (PLEG): container finished" podID="41e38977-61de-46c0-8a84-9cf4de5d5156" containerID="4665d0c3437bd95027c8e583cf9081713b404aef35d8dabb7396bd945228d56d" exitCode=0 Feb 17 17:55:42 crc kubenswrapper[4680]: I0217 17:55:42.574528 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j" event={"ID":"41e38977-61de-46c0-8a84-9cf4de5d5156","Type":"ContainerDied","Data":"4665d0c3437bd95027c8e583cf9081713b404aef35d8dabb7396bd945228d56d"} Feb 17 17:55:43 crc kubenswrapper[4680]: I0217 17:55:43.581885 4680 generic.go:334] "Generic (PLEG): container finished" podID="41e38977-61de-46c0-8a84-9cf4de5d5156" containerID="4b7ef9d5544419fcd608e65bbb2fc3688e6568fb338ddbba96aee8bd92baa531" exitCode=0 Feb 17 17:55:43 crc kubenswrapper[4680]: I0217 17:55:43.581943 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j" event={"ID":"41e38977-61de-46c0-8a84-9cf4de5d5156","Type":"ContainerDied","Data":"4b7ef9d5544419fcd608e65bbb2fc3688e6568fb338ddbba96aee8bd92baa531"} Feb 17 17:55:44 crc kubenswrapper[4680]: I0217 17:55:44.774897 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j" Feb 17 17:55:44 crc kubenswrapper[4680]: I0217 17:55:44.935312 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41e38977-61de-46c0-8a84-9cf4de5d5156-bundle\") pod \"41e38977-61de-46c0-8a84-9cf4de5d5156\" (UID: \"41e38977-61de-46c0-8a84-9cf4de5d5156\") " Feb 17 17:55:44 crc kubenswrapper[4680]: I0217 17:55:44.935425 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgjfj\" (UniqueName: \"kubernetes.io/projected/41e38977-61de-46c0-8a84-9cf4de5d5156-kube-api-access-qgjfj\") pod \"41e38977-61de-46c0-8a84-9cf4de5d5156\" (UID: \"41e38977-61de-46c0-8a84-9cf4de5d5156\") " Feb 17 17:55:44 crc kubenswrapper[4680]: I0217 17:55:44.935459 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41e38977-61de-46c0-8a84-9cf4de5d5156-util\") pod \"41e38977-61de-46c0-8a84-9cf4de5d5156\" (UID: \"41e38977-61de-46c0-8a84-9cf4de5d5156\") " Feb 17 17:55:44 crc kubenswrapper[4680]: I0217 17:55:44.936118 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41e38977-61de-46c0-8a84-9cf4de5d5156-bundle" (OuterVolumeSpecName: "bundle") pod "41e38977-61de-46c0-8a84-9cf4de5d5156" (UID: "41e38977-61de-46c0-8a84-9cf4de5d5156"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:55:44 crc kubenswrapper[4680]: I0217 17:55:44.944675 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41e38977-61de-46c0-8a84-9cf4de5d5156-kube-api-access-qgjfj" (OuterVolumeSpecName: "kube-api-access-qgjfj") pod "41e38977-61de-46c0-8a84-9cf4de5d5156" (UID: "41e38977-61de-46c0-8a84-9cf4de5d5156"). InnerVolumeSpecName "kube-api-access-qgjfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:55:44 crc kubenswrapper[4680]: I0217 17:55:44.950213 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41e38977-61de-46c0-8a84-9cf4de5d5156-util" (OuterVolumeSpecName: "util") pod "41e38977-61de-46c0-8a84-9cf4de5d5156" (UID: "41e38977-61de-46c0-8a84-9cf4de5d5156"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:55:45 crc kubenswrapper[4680]: I0217 17:55:45.037212 4680 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41e38977-61de-46c0-8a84-9cf4de5d5156-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 17:55:45 crc kubenswrapper[4680]: I0217 17:55:45.037265 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgjfj\" (UniqueName: \"kubernetes.io/projected/41e38977-61de-46c0-8a84-9cf4de5d5156-kube-api-access-qgjfj\") on node \"crc\" DevicePath \"\"" Feb 17 17:55:45 crc kubenswrapper[4680]: I0217 17:55:45.037277 4680 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41e38977-61de-46c0-8a84-9cf4de5d5156-util\") on node \"crc\" DevicePath \"\"" Feb 17 17:55:45 crc kubenswrapper[4680]: I0217 17:55:45.592491 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j" event={"ID":"41e38977-61de-46c0-8a84-9cf4de5d5156","Type":"ContainerDied","Data":"fc643eab605dcf18b98a379eeb2129293b88b6b60066c2e8e6c72bbd646c89b4"} Feb 17 17:55:45 crc kubenswrapper[4680]: I0217 17:55:45.592528 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc643eab605dcf18b98a379eeb2129293b88b6b60066c2e8e6c72bbd646c89b4" Feb 17 17:55:45 crc kubenswrapper[4680]: I0217 17:55:45.592593 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j" Feb 17 17:55:47 crc kubenswrapper[4680]: I0217 17:55:47.771241 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-nhzd8"] Feb 17 17:55:47 crc kubenswrapper[4680]: E0217 17:55:47.772085 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41e38977-61de-46c0-8a84-9cf4de5d5156" containerName="util" Feb 17 17:55:47 crc kubenswrapper[4680]: I0217 17:55:47.772102 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="41e38977-61de-46c0-8a84-9cf4de5d5156" containerName="util" Feb 17 17:55:47 crc kubenswrapper[4680]: E0217 17:55:47.772138 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41e38977-61de-46c0-8a84-9cf4de5d5156" containerName="extract" Feb 17 17:55:47 crc kubenswrapper[4680]: I0217 17:55:47.772148 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="41e38977-61de-46c0-8a84-9cf4de5d5156" containerName="extract" Feb 17 17:55:47 crc kubenswrapper[4680]: E0217 17:55:47.772169 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41e38977-61de-46c0-8a84-9cf4de5d5156" containerName="pull" Feb 17 17:55:47 crc kubenswrapper[4680]: I0217 17:55:47.772177 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="41e38977-61de-46c0-8a84-9cf4de5d5156" containerName="pull" Feb 17 17:55:47 crc kubenswrapper[4680]: I0217 17:55:47.772459 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="41e38977-61de-46c0-8a84-9cf4de5d5156" containerName="extract" Feb 17 17:55:47 crc kubenswrapper[4680]: I0217 17:55:47.773140 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-nhzd8" Feb 17 17:55:47 crc kubenswrapper[4680]: I0217 17:55:47.780740 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-nvls9" Feb 17 17:55:47 crc kubenswrapper[4680]: I0217 17:55:47.780941 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 17 17:55:47 crc kubenswrapper[4680]: I0217 17:55:47.782621 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5l7r\" (UniqueName: \"kubernetes.io/projected/5b484a76-ef98-43ef-9c15-0d2f86ce23a6-kube-api-access-j5l7r\") pod \"nmstate-operator-694c9596b7-nhzd8\" (UID: \"5b484a76-ef98-43ef-9c15-0d2f86ce23a6\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-nhzd8" Feb 17 17:55:47 crc kubenswrapper[4680]: I0217 17:55:47.787297 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 17 17:55:47 crc kubenswrapper[4680]: I0217 17:55:47.801227 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-nhzd8"] Feb 17 17:55:47 crc kubenswrapper[4680]: I0217 17:55:47.883367 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5l7r\" (UniqueName: \"kubernetes.io/projected/5b484a76-ef98-43ef-9c15-0d2f86ce23a6-kube-api-access-j5l7r\") pod \"nmstate-operator-694c9596b7-nhzd8\" (UID: \"5b484a76-ef98-43ef-9c15-0d2f86ce23a6\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-nhzd8" Feb 17 17:55:47 crc kubenswrapper[4680]: I0217 17:55:47.913603 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5l7r\" (UniqueName: \"kubernetes.io/projected/5b484a76-ef98-43ef-9c15-0d2f86ce23a6-kube-api-access-j5l7r\") pod \"nmstate-operator-694c9596b7-nhzd8\" (UID: \"5b484a76-ef98-43ef-9c15-0d2f86ce23a6\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-nhzd8" Feb 17 17:55:48 crc kubenswrapper[4680]: I0217 17:55:48.092997 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-nhzd8" Feb 17 17:55:48 crc kubenswrapper[4680]: I0217 17:55:48.322244 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-nhzd8"] Feb 17 17:55:48 crc kubenswrapper[4680]: I0217 17:55:48.607866 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-nhzd8" event={"ID":"5b484a76-ef98-43ef-9c15-0d2f86ce23a6","Type":"ContainerStarted","Data":"3367265e10a5c7d9a543643be355c4f35545728ad0c74004552b4a28e1523bb8"} Feb 17 17:55:52 crc kubenswrapper[4680]: I0217 17:55:52.627754 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-nhzd8" event={"ID":"5b484a76-ef98-43ef-9c15-0d2f86ce23a6","Type":"ContainerStarted","Data":"24ba4ca0e0af5e88332d58e3032b41c956e3d067128b7af1a6554474f4b489d8"} Feb 17 17:55:52 crc kubenswrapper[4680]: I0217 17:55:52.648113 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-nhzd8" podStartSLOduration=1.617163589 podStartE2EDuration="5.648089787s" podCreationTimestamp="2026-02-17 17:55:47 +0000 UTC" firstStartedPulling="2026-02-17 17:55:48.331166137 +0000 UTC m=+677.882064816" lastFinishedPulling="2026-02-17 17:55:52.362092335 +0000 UTC m=+681.912991014" observedRunningTime="2026-02-17 17:55:52.641517029 +0000 UTC m=+682.192415708" watchObservedRunningTime="2026-02-17 17:55:52.648089787 +0000 UTC m=+682.198988466" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.611615 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-88gd2"] Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.612427 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-88gd2" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.617391 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-w76l7" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.624508 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-hffc9"] Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.625407 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-hffc9" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.631370 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.639842 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-88gd2"] Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.650850 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-hffc9"] Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.669556 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/35eb3c36-116b-4a21-9d5f-0a4c28011eb4-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-hffc9\" (UID: \"35eb3c36-116b-4a21-9d5f-0a4c28011eb4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-hffc9" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.669674 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jk2k\" (UniqueName: \"kubernetes.io/projected/2e5d7226-10e8-4b3f-b5ef-3d976c1f8462-kube-api-access-4jk2k\") pod \"nmstate-metrics-58c85c668d-88gd2\" (UID: \"2e5d7226-10e8-4b3f-b5ef-3d976c1f8462\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-88gd2" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.669718 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk8vv\" (UniqueName: \"kubernetes.io/projected/35eb3c36-116b-4a21-9d5f-0a4c28011eb4-kube-api-access-zk8vv\") pod \"nmstate-webhook-866bcb46dc-hffc9\" (UID: \"35eb3c36-116b-4a21-9d5f-0a4c28011eb4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-hffc9" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.673570 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-nh5lq"] Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.674273 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-nh5lq" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.770254 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jk2k\" (UniqueName: \"kubernetes.io/projected/2e5d7226-10e8-4b3f-b5ef-3d976c1f8462-kube-api-access-4jk2k\") pod \"nmstate-metrics-58c85c668d-88gd2\" (UID: \"2e5d7226-10e8-4b3f-b5ef-3d976c1f8462\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-88gd2" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.770292 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk8vv\" (UniqueName: \"kubernetes.io/projected/35eb3c36-116b-4a21-9d5f-0a4c28011eb4-kube-api-access-zk8vv\") pod \"nmstate-webhook-866bcb46dc-hffc9\" (UID: \"35eb3c36-116b-4a21-9d5f-0a4c28011eb4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-hffc9" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.770324 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/853da1fb-12a1-4f09-969f-9e8e58ebafc1-nmstate-lock\") pod \"nmstate-handler-nh5lq\" (UID: \"853da1fb-12a1-4f09-969f-9e8e58ebafc1\") " pod="openshift-nmstate/nmstate-handler-nh5lq" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.770366 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/35eb3c36-116b-4a21-9d5f-0a4c28011eb4-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-hffc9\" (UID: \"35eb3c36-116b-4a21-9d5f-0a4c28011eb4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-hffc9" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.770390 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/853da1fb-12a1-4f09-969f-9e8e58ebafc1-ovs-socket\") pod \"nmstate-handler-nh5lq\" (UID: \"853da1fb-12a1-4f09-969f-9e8e58ebafc1\") " pod="openshift-nmstate/nmstate-handler-nh5lq" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.770432 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/853da1fb-12a1-4f09-969f-9e8e58ebafc1-dbus-socket\") pod \"nmstate-handler-nh5lq\" (UID: \"853da1fb-12a1-4f09-969f-9e8e58ebafc1\") " pod="openshift-nmstate/nmstate-handler-nh5lq" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.770493 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9r2h\" (UniqueName: \"kubernetes.io/projected/853da1fb-12a1-4f09-969f-9e8e58ebafc1-kube-api-access-d9r2h\") pod \"nmstate-handler-nh5lq\" (UID: \"853da1fb-12a1-4f09-969f-9e8e58ebafc1\") " pod="openshift-nmstate/nmstate-handler-nh5lq" Feb 17 17:55:53 crc kubenswrapper[4680]: E0217 17:55:53.770558 4680 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 17 17:55:53 crc kubenswrapper[4680]: E0217 17:55:53.770673 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35eb3c36-116b-4a21-9d5f-0a4c28011eb4-tls-key-pair podName:35eb3c36-116b-4a21-9d5f-0a4c28011eb4 nodeName:}" failed. No retries permitted until 2026-02-17 17:55:54.270613497 +0000 UTC m=+683.821512176 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/35eb3c36-116b-4a21-9d5f-0a4c28011eb4-tls-key-pair") pod "nmstate-webhook-866bcb46dc-hffc9" (UID: "35eb3c36-116b-4a21-9d5f-0a4c28011eb4") : secret "openshift-nmstate-webhook" not found Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.787037 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bqn9b"] Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.787699 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bqn9b" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.793775 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.794135 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.794390 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-28tbb" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.795183 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jk2k\" (UniqueName: \"kubernetes.io/projected/2e5d7226-10e8-4b3f-b5ef-3d976c1f8462-kube-api-access-4jk2k\") pod \"nmstate-metrics-58c85c668d-88gd2\" (UID: \"2e5d7226-10e8-4b3f-b5ef-3d976c1f8462\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-88gd2" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.798855 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk8vv\" (UniqueName: \"kubernetes.io/projected/35eb3c36-116b-4a21-9d5f-0a4c28011eb4-kube-api-access-zk8vv\") pod \"nmstate-webhook-866bcb46dc-hffc9\" (UID: \"35eb3c36-116b-4a21-9d5f-0a4c28011eb4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-hffc9" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.812720 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bqn9b"] Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.871629 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d11f19f3-34a3-42d7-b206-4c4377f903c1-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-bqn9b\" (UID: \"d11f19f3-34a3-42d7-b206-4c4377f903c1\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bqn9b" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.871672 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/853da1fb-12a1-4f09-969f-9e8e58ebafc1-ovs-socket\") pod \"nmstate-handler-nh5lq\" (UID: \"853da1fb-12a1-4f09-969f-9e8e58ebafc1\") " pod="openshift-nmstate/nmstate-handler-nh5lq" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.871712 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/853da1fb-12a1-4f09-969f-9e8e58ebafc1-dbus-socket\") pod \"nmstate-handler-nh5lq\" (UID: \"853da1fb-12a1-4f09-969f-9e8e58ebafc1\") " pod="openshift-nmstate/nmstate-handler-nh5lq" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.871727 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9r2h\" (UniqueName: 
\"kubernetes.io/projected/853da1fb-12a1-4f09-969f-9e8e58ebafc1-kube-api-access-d9r2h\") pod \"nmstate-handler-nh5lq\" (UID: \"853da1fb-12a1-4f09-969f-9e8e58ebafc1\") " pod="openshift-nmstate/nmstate-handler-nh5lq" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.871750 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlj9q\" (UniqueName: \"kubernetes.io/projected/d11f19f3-34a3-42d7-b206-4c4377f903c1-kube-api-access-dlj9q\") pod \"nmstate-console-plugin-5c78fc5d65-bqn9b\" (UID: \"d11f19f3-34a3-42d7-b206-4c4377f903c1\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bqn9b" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.871785 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/853da1fb-12a1-4f09-969f-9e8e58ebafc1-nmstate-lock\") pod \"nmstate-handler-nh5lq\" (UID: \"853da1fb-12a1-4f09-969f-9e8e58ebafc1\") " pod="openshift-nmstate/nmstate-handler-nh5lq" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.871808 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d11f19f3-34a3-42d7-b206-4c4377f903c1-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-bqn9b\" (UID: \"d11f19f3-34a3-42d7-b206-4c4377f903c1\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bqn9b" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.871883 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/853da1fb-12a1-4f09-969f-9e8e58ebafc1-ovs-socket\") pod \"nmstate-handler-nh5lq\" (UID: \"853da1fb-12a1-4f09-969f-9e8e58ebafc1\") " pod="openshift-nmstate/nmstate-handler-nh5lq" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.872154 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/853da1fb-12a1-4f09-969f-9e8e58ebafc1-dbus-socket\") pod \"nmstate-handler-nh5lq\" (UID: \"853da1fb-12a1-4f09-969f-9e8e58ebafc1\") " pod="openshift-nmstate/nmstate-handler-nh5lq" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.872193 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/853da1fb-12a1-4f09-969f-9e8e58ebafc1-nmstate-lock\") pod \"nmstate-handler-nh5lq\" (UID: \"853da1fb-12a1-4f09-969f-9e8e58ebafc1\") " pod="openshift-nmstate/nmstate-handler-nh5lq" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.893874 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9r2h\" (UniqueName: \"kubernetes.io/projected/853da1fb-12a1-4f09-969f-9e8e58ebafc1-kube-api-access-d9r2h\") pod \"nmstate-handler-nh5lq\" (UID: \"853da1fb-12a1-4f09-969f-9e8e58ebafc1\") " pod="openshift-nmstate/nmstate-handler-nh5lq" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.927581 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-88gd2" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.972663 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d11f19f3-34a3-42d7-b206-4c4377f903c1-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-bqn9b\" (UID: \"d11f19f3-34a3-42d7-b206-4c4377f903c1\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bqn9b" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.972769 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlj9q\" (UniqueName: \"kubernetes.io/projected/d11f19f3-34a3-42d7-b206-4c4377f903c1-kube-api-access-dlj9q\") pod \"nmstate-console-plugin-5c78fc5d65-bqn9b\" (UID: \"d11f19f3-34a3-42d7-b206-4c4377f903c1\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bqn9b" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.972831 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d11f19f3-34a3-42d7-b206-4c4377f903c1-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-bqn9b\" (UID: \"d11f19f3-34a3-42d7-b206-4c4377f903c1\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bqn9b" Feb 17 17:55:53 crc kubenswrapper[4680]: E0217 17:55:53.972955 4680 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 17 17:55:53 crc kubenswrapper[4680]: E0217 17:55:53.973118 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d11f19f3-34a3-42d7-b206-4c4377f903c1-plugin-serving-cert podName:d11f19f3-34a3-42d7-b206-4c4377f903c1 nodeName:}" failed. No retries permitted until 2026-02-17 17:55:54.473097376 +0000 UTC m=+684.023996055 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/d11f19f3-34a3-42d7-b206-4c4377f903c1-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-bqn9b" (UID: "d11f19f3-34a3-42d7-b206-4c4377f903c1") : secret "plugin-serving-cert" not found Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.973879 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d11f19f3-34a3-42d7-b206-4c4377f903c1-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-bqn9b\" (UID: \"d11f19f3-34a3-42d7-b206-4c4377f903c1\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bqn9b" Feb 17 17:55:53 crc kubenswrapper[4680]: I0217 17:55:53.988610 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-nh5lq" Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.000173 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlj9q\" (UniqueName: \"kubernetes.io/projected/d11f19f3-34a3-42d7-b206-4c4377f903c1-kube-api-access-dlj9q\") pod \"nmstate-console-plugin-5c78fc5d65-bqn9b\" (UID: \"d11f19f3-34a3-42d7-b206-4c4377f903c1\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bqn9b" Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.019494 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7c558c8d84-wg8bz"] Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.020872 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7c558c8d84-wg8bz" Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.071855 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7c558c8d84-wg8bz"] Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.074145 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1558918d-0e9d-4fb8-9eca-e1c2e54122d1-oauth-serving-cert\") pod \"console-7c558c8d84-wg8bz\" (UID: \"1558918d-0e9d-4fb8-9eca-e1c2e54122d1\") " pod="openshift-console/console-7c558c8d84-wg8bz" Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.074201 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1558918d-0e9d-4fb8-9eca-e1c2e54122d1-console-oauth-config\") pod \"console-7c558c8d84-wg8bz\" (UID: \"1558918d-0e9d-4fb8-9eca-e1c2e54122d1\") " pod="openshift-console/console-7c558c8d84-wg8bz" Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.074268 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1558918d-0e9d-4fb8-9eca-e1c2e54122d1-console-config\") pod \"console-7c558c8d84-wg8bz\" (UID: \"1558918d-0e9d-4fb8-9eca-e1c2e54122d1\") " pod="openshift-console/console-7c558c8d84-wg8bz" Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.074311 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmvs9\" (UniqueName: \"kubernetes.io/projected/1558918d-0e9d-4fb8-9eca-e1c2e54122d1-kube-api-access-bmvs9\") pod \"console-7c558c8d84-wg8bz\" (UID: \"1558918d-0e9d-4fb8-9eca-e1c2e54122d1\") " pod="openshift-console/console-7c558c8d84-wg8bz" Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.074480 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1558918d-0e9d-4fb8-9eca-e1c2e54122d1-console-serving-cert\") pod \"console-7c558c8d84-wg8bz\" (UID: \"1558918d-0e9d-4fb8-9eca-e1c2e54122d1\") " pod="openshift-console/console-7c558c8d84-wg8bz" Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.074538 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1558918d-0e9d-4fb8-9eca-e1c2e54122d1-service-ca\") pod \"console-7c558c8d84-wg8bz\" (UID: \"1558918d-0e9d-4fb8-9eca-e1c2e54122d1\") " pod="openshift-console/console-7c558c8d84-wg8bz" Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.074562 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1558918d-0e9d-4fb8-9eca-e1c2e54122d1-trusted-ca-bundle\") pod \"console-7c558c8d84-wg8bz\" (UID: \"1558918d-0e9d-4fb8-9eca-e1c2e54122d1\") " pod="openshift-console/console-7c558c8d84-wg8bz" Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.177094 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1558918d-0e9d-4fb8-9eca-e1c2e54122d1-console-config\") pod \"console-7c558c8d84-wg8bz\" (UID: \"1558918d-0e9d-4fb8-9eca-e1c2e54122d1\") " pod="openshift-console/console-7c558c8d84-wg8bz" Feb 17 17:55:54 crc 
kubenswrapper[4680]: I0217 17:55:54.177441 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmvs9\" (UniqueName: \"kubernetes.io/projected/1558918d-0e9d-4fb8-9eca-e1c2e54122d1-kube-api-access-bmvs9\") pod \"console-7c558c8d84-wg8bz\" (UID: \"1558918d-0e9d-4fb8-9eca-e1c2e54122d1\") " pod="openshift-console/console-7c558c8d84-wg8bz" Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.177477 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1558918d-0e9d-4fb8-9eca-e1c2e54122d1-console-serving-cert\") pod \"console-7c558c8d84-wg8bz\" (UID: \"1558918d-0e9d-4fb8-9eca-e1c2e54122d1\") " pod="openshift-console/console-7c558c8d84-wg8bz" Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.177515 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1558918d-0e9d-4fb8-9eca-e1c2e54122d1-service-ca\") pod \"console-7c558c8d84-wg8bz\" (UID: \"1558918d-0e9d-4fb8-9eca-e1c2e54122d1\") " pod="openshift-console/console-7c558c8d84-wg8bz" Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.177533 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1558918d-0e9d-4fb8-9eca-e1c2e54122d1-trusted-ca-bundle\") pod \"console-7c558c8d84-wg8bz\" (UID: \"1558918d-0e9d-4fb8-9eca-e1c2e54122d1\") " pod="openshift-console/console-7c558c8d84-wg8bz" Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.177565 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1558918d-0e9d-4fb8-9eca-e1c2e54122d1-oauth-serving-cert\") pod \"console-7c558c8d84-wg8bz\" (UID: \"1558918d-0e9d-4fb8-9eca-e1c2e54122d1\") " pod="openshift-console/console-7c558c8d84-wg8bz" Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.177594 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1558918d-0e9d-4fb8-9eca-e1c2e54122d1-console-oauth-config\") pod \"console-7c558c8d84-wg8bz\" (UID: \"1558918d-0e9d-4fb8-9eca-e1c2e54122d1\") " pod="openshift-console/console-7c558c8d84-wg8bz" Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.178561 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1558918d-0e9d-4fb8-9eca-e1c2e54122d1-oauth-serving-cert\") pod \"console-7c558c8d84-wg8bz\" (UID: \"1558918d-0e9d-4fb8-9eca-e1c2e54122d1\") " pod="openshift-console/console-7c558c8d84-wg8bz" Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.178766 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1558918d-0e9d-4fb8-9eca-e1c2e54122d1-service-ca\") pod \"console-7c558c8d84-wg8bz\" (UID: \"1558918d-0e9d-4fb8-9eca-e1c2e54122d1\") " pod="openshift-console/console-7c558c8d84-wg8bz" Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.179605 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1558918d-0e9d-4fb8-9eca-e1c2e54122d1-trusted-ca-bundle\") pod \"console-7c558c8d84-wg8bz\" (UID: \"1558918d-0e9d-4fb8-9eca-e1c2e54122d1\") " pod="openshift-console/console-7c558c8d84-wg8bz" Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 
17:55:54.179948 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1558918d-0e9d-4fb8-9eca-e1c2e54122d1-console-config\") pod \"console-7c558c8d84-wg8bz\" (UID: \"1558918d-0e9d-4fb8-9eca-e1c2e54122d1\") " pod="openshift-console/console-7c558c8d84-wg8bz" Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.182273 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1558918d-0e9d-4fb8-9eca-e1c2e54122d1-console-oauth-config\") pod \"console-7c558c8d84-wg8bz\" (UID: \"1558918d-0e9d-4fb8-9eca-e1c2e54122d1\") " pod="openshift-console/console-7c558c8d84-wg8bz" Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.182611 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1558918d-0e9d-4fb8-9eca-e1c2e54122d1-console-serving-cert\") pod \"console-7c558c8d84-wg8bz\" (UID: \"1558918d-0e9d-4fb8-9eca-e1c2e54122d1\") " pod="openshift-console/console-7c558c8d84-wg8bz" Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.196798 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmvs9\" (UniqueName: \"kubernetes.io/projected/1558918d-0e9d-4fb8-9eca-e1c2e54122d1-kube-api-access-bmvs9\") pod \"console-7c558c8d84-wg8bz\" (UID: \"1558918d-0e9d-4fb8-9eca-e1c2e54122d1\") " pod="openshift-console/console-7c558c8d84-wg8bz" Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.278192 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/35eb3c36-116b-4a21-9d5f-0a4c28011eb4-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-hffc9\" (UID: \"35eb3c36-116b-4a21-9d5f-0a4c28011eb4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-hffc9" Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.281313 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/35eb3c36-116b-4a21-9d5f-0a4c28011eb4-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-hffc9\" (UID: \"35eb3c36-116b-4a21-9d5f-0a4c28011eb4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-hffc9" Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.344939 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7c558c8d84-wg8bz" Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.438035 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-88gd2"] Feb 17 17:55:54 crc kubenswrapper[4680]: W0217 17:55:54.440688 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e5d7226_10e8_4b3f_b5ef_3d976c1f8462.slice/crio-e2d77d739452bfb56df168d9b22730ad6e43e4889616889bcfb7e2c930fcd176 WatchSource:0}: Error finding container e2d77d739452bfb56df168d9b22730ad6e43e4889616889bcfb7e2c930fcd176: Status 404 returned error can't find the container with id e2d77d739452bfb56df168d9b22730ad6e43e4889616889bcfb7e2c930fcd176 Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.480940 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d11f19f3-34a3-42d7-b206-4c4377f903c1-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-bqn9b\" (UID: \"d11f19f3-34a3-42d7-b206-4c4377f903c1\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bqn9b" Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.489623 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d11f19f3-34a3-42d7-b206-4c4377f903c1-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-bqn9b\" (UID: \"d11f19f3-34a3-42d7-b206-4c4377f903c1\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bqn9b" Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.539640 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-hffc9" Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.637255 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-88gd2" event={"ID":"2e5d7226-10e8-4b3f-b5ef-3d976c1f8462","Type":"ContainerStarted","Data":"e2d77d739452bfb56df168d9b22730ad6e43e4889616889bcfb7e2c930fcd176"} Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.638062 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-nh5lq" event={"ID":"853da1fb-12a1-4f09-969f-9e8e58ebafc1","Type":"ContainerStarted","Data":"08876418aec63f26d46e815fd560a80f8873fd103b32e27fff19421eb9630327"} Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.716204 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-hffc9"] Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.725972 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7c558c8d84-wg8bz"] Feb 17 17:55:54 crc kubenswrapper[4680]: W0217 17:55:54.732293 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1558918d_0e9d_4fb8_9eca_e1c2e54122d1.slice/crio-855d48d5547942bfca5537f177c2aaf70aa55ece1b54e63dc62f67d74977f78e WatchSource:0}: Error finding container 855d48d5547942bfca5537f177c2aaf70aa55ece1b54e63dc62f67d74977f78e: Status 404 returned error can't find the container with id 855d48d5547942bfca5537f177c2aaf70aa55ece1b54e63dc62f67d74977f78e Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.737690 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bqn9b" Feb 17 17:55:54 crc kubenswrapper[4680]: I0217 17:55:54.965092 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bqn9b"] Feb 17 17:55:55 crc kubenswrapper[4680]: I0217 17:55:55.644159 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bqn9b" event={"ID":"d11f19f3-34a3-42d7-b206-4c4377f903c1","Type":"ContainerStarted","Data":"f9d951877a6ca0ac7d034b08c33b4ee26167c52795bb0684aa8101d61daee17f"} Feb 17 17:55:55 crc kubenswrapper[4680]: I0217 17:55:55.646659 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c558c8d84-wg8bz" event={"ID":"1558918d-0e9d-4fb8-9eca-e1c2e54122d1","Type":"ContainerStarted","Data":"e9df0b2b352078fb23560c277c8aa2cf3e09556b5a167821abcac80434b7cccd"} Feb 17 17:55:55 crc kubenswrapper[4680]: I0217 17:55:55.646757 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c558c8d84-wg8bz" event={"ID":"1558918d-0e9d-4fb8-9eca-e1c2e54122d1","Type":"ContainerStarted","Data":"855d48d5547942bfca5537f177c2aaf70aa55ece1b54e63dc62f67d74977f78e"} Feb 17 17:55:55 crc kubenswrapper[4680]: I0217 17:55:55.649878 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-hffc9" event={"ID":"35eb3c36-116b-4a21-9d5f-0a4c28011eb4","Type":"ContainerStarted","Data":"0f9a939525c72714c499dd8e8d18fd5d35bcb0eb70c2e89e76fc25c869a27b18"} Feb 17 17:55:55 crc kubenswrapper[4680]: I0217 17:55:55.667082 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7c558c8d84-wg8bz" podStartSLOduration=2.667061163 podStartE2EDuration="2.667061163s" podCreationTimestamp="2026-02-17 17:55:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:55:55.66314908 +0000 UTC m=+685.214047759" watchObservedRunningTime="2026-02-17 17:55:55.667061163 +0000 UTC m=+685.217959852" Feb 17 17:55:57 crc kubenswrapper[4680]: I0217 17:55:57.668618 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-88gd2" event={"ID":"2e5d7226-10e8-4b3f-b5ef-3d976c1f8462","Type":"ContainerStarted","Data":"82030506cb0308d0b777e839680b01d37ad1e54cad68879205c926be553cb222"} Feb 17 17:55:57 crc kubenswrapper[4680]: I0217 17:55:57.670592 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-hffc9" event={"ID":"35eb3c36-116b-4a21-9d5f-0a4c28011eb4","Type":"ContainerStarted","Data":"b6afc92e134b32cc1618dc7fc1fd7a7fbb931d116ec582110f32c3545267237c"} Feb 17 17:55:57 crc kubenswrapper[4680]: I0217 17:55:57.670720 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-hffc9" Feb 17 17:55:57 crc kubenswrapper[4680]: I0217 17:55:57.672000 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bqn9b" event={"ID":"d11f19f3-34a3-42d7-b206-4c4377f903c1","Type":"ContainerStarted","Data":"99a74df9cbf3711c323e1c031b09939a8ee91d5aa9714eb8c46b440a3a608dd0"} Feb 17 17:55:57 crc kubenswrapper[4680]: I0217 17:55:57.673999 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-nh5lq" 
event={"ID":"853da1fb-12a1-4f09-969f-9e8e58ebafc1","Type":"ContainerStarted","Data":"10ee69c38cec90d65c45ac1cf507f9949a82d4231929d48ac23b142a7896bb1c"} Feb 17 17:55:57 crc kubenswrapper[4680]: I0217 17:55:57.674364 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-nh5lq" Feb 17 17:55:57 crc kubenswrapper[4680]: I0217 17:55:57.689410 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-hffc9" podStartSLOduration=2.788380523 podStartE2EDuration="4.689390164s" podCreationTimestamp="2026-02-17 17:55:53 +0000 UTC" firstStartedPulling="2026-02-17 17:55:54.725169331 +0000 UTC m=+684.276068010" lastFinishedPulling="2026-02-17 17:55:56.626178962 +0000 UTC m=+686.177077651" observedRunningTime="2026-02-17 17:55:57.685270524 +0000 UTC m=+687.236169203" watchObservedRunningTime="2026-02-17 17:55:57.689390164 +0000 UTC m=+687.240288843" Feb 17 17:55:57 crc kubenswrapper[4680]: I0217 17:55:57.719284 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bqn9b" podStartSLOduration=2.235938457 podStartE2EDuration="4.719266209s" podCreationTimestamp="2026-02-17 17:55:53 +0000 UTC" firstStartedPulling="2026-02-17 17:55:54.975009519 +0000 UTC m=+684.525908198" lastFinishedPulling="2026-02-17 17:55:57.458337281 +0000 UTC m=+687.009235950" observedRunningTime="2026-02-17 17:55:57.704060219 +0000 UTC m=+687.254958898" watchObservedRunningTime="2026-02-17 17:55:57.719266209 +0000 UTC m=+687.270164888" Feb 17 17:55:57 crc kubenswrapper[4680]: I0217 17:55:57.737766 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-nh5lq" podStartSLOduration=2.187444603 podStartE2EDuration="4.737747475s" podCreationTimestamp="2026-02-17 17:55:53 +0000 UTC" firstStartedPulling="2026-02-17 17:55:54.046186699 +0000 UTC m=+683.597085378" lastFinishedPulling="2026-02-17 17:55:56.596489571 +0000 UTC m=+686.147388250" observedRunningTime="2026-02-17 17:55:57.735174664 +0000 UTC m=+687.286073363" watchObservedRunningTime="2026-02-17 17:55:57.737747475 +0000 UTC m=+687.288646154" Feb 17 17:55:59 crc kubenswrapper[4680]: I0217 17:55:59.693959 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-88gd2" event={"ID":"2e5d7226-10e8-4b3f-b5ef-3d976c1f8462","Type":"ContainerStarted","Data":"59de1bc9819916354abae776fc6a5157ca52b2bd63ae79c22b52900fccff0514"} Feb 17 17:55:59 crc kubenswrapper[4680]: I0217 17:55:59.713083 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-88gd2" podStartSLOduration=2.416015666 podStartE2EDuration="6.713067207s" podCreationTimestamp="2026-02-17 17:55:53 +0000 UTC" firstStartedPulling="2026-02-17 17:55:54.443489394 +0000 UTC m=+683.994388073" lastFinishedPulling="2026-02-17 17:55:58.740540935 +0000 UTC m=+688.291439614" observedRunningTime="2026-02-17 17:55:59.708327667 +0000 UTC m=+689.259226346" watchObservedRunningTime="2026-02-17 17:55:59.713067207 +0000 UTC m=+689.263965886" Feb 17 17:56:04 crc kubenswrapper[4680]: I0217 17:56:04.010183 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-nh5lq" Feb 17 17:56:04 crc kubenswrapper[4680]: I0217 17:56:04.346236 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-console/console-7c558c8d84-wg8bz" Feb 17 17:56:04 crc kubenswrapper[4680]: I0217 17:56:04.346546 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7c558c8d84-wg8bz" Feb 17 17:56:04 crc kubenswrapper[4680]: I0217 17:56:04.350389 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7c558c8d84-wg8bz" Feb 17 17:56:04 crc kubenswrapper[4680]: I0217 17:56:04.722469 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7c558c8d84-wg8bz" Feb 17 17:56:04 crc kubenswrapper[4680]: I0217 17:56:04.762173 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-t9gdn"] Feb 17 17:56:05 crc kubenswrapper[4680]: I0217 17:56:05.588654 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:56:05 crc kubenswrapper[4680]: I0217 17:56:05.588726 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:56:14 crc kubenswrapper[4680]: I0217 17:56:14.545205 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-hffc9" Feb 17 17:56:27 crc kubenswrapper[4680]: I0217 17:56:27.360031 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk"] Feb 17 17:56:27 crc kubenswrapper[4680]: I0217 17:56:27.361945 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk" Feb 17 17:56:27 crc kubenswrapper[4680]: I0217 17:56:27.364759 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 17:56:27 crc kubenswrapper[4680]: I0217 17:56:27.377608 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk"] Feb 17 17:56:27 crc kubenswrapper[4680]: I0217 17:56:27.399933 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9c7369ac-8367-419f-b50f-82d0ba1cef96-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk\" (UID: \"9c7369ac-8367-419f-b50f-82d0ba1cef96\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk" Feb 17 17:56:27 crc kubenswrapper[4680]: I0217 17:56:27.400097 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9c7369ac-8367-419f-b50f-82d0ba1cef96-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk\" (UID: \"9c7369ac-8367-419f-b50f-82d0ba1cef96\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk" Feb 17 17:56:27 crc kubenswrapper[4680]: I0217 17:56:27.400230 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwjbv\" (UniqueName: \"kubernetes.io/projected/9c7369ac-8367-419f-b50f-82d0ba1cef96-kube-api-access-pwjbv\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk\" (UID: \"9c7369ac-8367-419f-b50f-82d0ba1cef96\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk" Feb 17 17:56:27 crc kubenswrapper[4680]: I0217 17:56:27.501575 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9c7369ac-8367-419f-b50f-82d0ba1cef96-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk\" (UID: \"9c7369ac-8367-419f-b50f-82d0ba1cef96\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk" Feb 17 17:56:27 crc kubenswrapper[4680]: I0217 17:56:27.501628 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9c7369ac-8367-419f-b50f-82d0ba1cef96-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk\" (UID: \"9c7369ac-8367-419f-b50f-82d0ba1cef96\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk" Feb 17 17:56:27 crc kubenswrapper[4680]: I0217 17:56:27.501658 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwjbv\" (UniqueName: \"kubernetes.io/projected/9c7369ac-8367-419f-b50f-82d0ba1cef96-kube-api-access-pwjbv\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk\" (UID: \"9c7369ac-8367-419f-b50f-82d0ba1cef96\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk" Feb 17 17:56:27 crc kubenswrapper[4680]: I0217 17:56:27.502050 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/9c7369ac-8367-419f-b50f-82d0ba1cef96-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk\" (UID: \"9c7369ac-8367-419f-b50f-82d0ba1cef96\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk" Feb 17 17:56:27 crc kubenswrapper[4680]: I0217 17:56:27.502351 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9c7369ac-8367-419f-b50f-82d0ba1cef96-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk\" (UID: \"9c7369ac-8367-419f-b50f-82d0ba1cef96\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk" Feb 17 17:56:27 crc kubenswrapper[4680]: I0217 17:56:27.519417 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwjbv\" (UniqueName: \"kubernetes.io/projected/9c7369ac-8367-419f-b50f-82d0ba1cef96-kube-api-access-pwjbv\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk\" (UID: \"9c7369ac-8367-419f-b50f-82d0ba1cef96\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk" Feb 17 17:56:27 crc kubenswrapper[4680]: I0217 17:56:27.677116 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk" Feb 17 17:56:28 crc kubenswrapper[4680]: I0217 17:56:28.054640 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk"] Feb 17 17:56:28 crc kubenswrapper[4680]: I0217 17:56:28.863872 4680 generic.go:334] "Generic (PLEG): container finished" podID="9c7369ac-8367-419f-b50f-82d0ba1cef96" containerID="44a72fd7b91b0be0bf71c7235856d9d43b12af91b473de878c8c9da54e85fb5b" exitCode=0 Feb 17 17:56:28 crc kubenswrapper[4680]: I0217 17:56:28.863946 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk" event={"ID":"9c7369ac-8367-419f-b50f-82d0ba1cef96","Type":"ContainerDied","Data":"44a72fd7b91b0be0bf71c7235856d9d43b12af91b473de878c8c9da54e85fb5b"} Feb 17 17:56:28 crc kubenswrapper[4680]: I0217 17:56:28.864126 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk" event={"ID":"9c7369ac-8367-419f-b50f-82d0ba1cef96","Type":"ContainerStarted","Data":"e6b859569adb90f74a7838f0a3573b88a0da65a62d5f3e4ba4634c85063da0da"} Feb 17 17:56:29 crc kubenswrapper[4680]: I0217 17:56:29.815084 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-t9gdn" podUID="ed1c57d6-3991-4ebb-9149-63e38e19dc40" containerName="console" containerID="cri-o://4cc154261124020a4b7be820cd9620dbc115d3fcd61329eaab6df60d6da8920e" gracePeriod=15 Feb 17 17:56:29 crc kubenswrapper[4680]: I0217 17:56:29.876804 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk" event={"ID":"9c7369ac-8367-419f-b50f-82d0ba1cef96","Type":"ContainerStarted","Data":"3f29c7e56e3c20146efdfac62bd53ed364614b1927bdd893e3906be761120261"} Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.217154 4680 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-f9d7485db-t9gdn_ed1c57d6-3991-4ebb-9149-63e38e19dc40/console/0.log" Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.217226 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-t9gdn" Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.343262 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ed1c57d6-3991-4ebb-9149-63e38e19dc40-console-oauth-config\") pod \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\" (UID: \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\") " Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.343343 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed1c57d6-3991-4ebb-9149-63e38e19dc40-trusted-ca-bundle\") pod \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\" (UID: \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\") " Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.343384 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ed1c57d6-3991-4ebb-9149-63e38e19dc40-oauth-serving-cert\") pod \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\" (UID: \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\") " Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.343412 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ed1c57d6-3991-4ebb-9149-63e38e19dc40-service-ca\") pod \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\" (UID: \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\") " Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.343441 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed1c57d6-3991-4ebb-9149-63e38e19dc40-console-serving-cert\") pod \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\" (UID: \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\") " Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.343492 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8rds\" (UniqueName: \"kubernetes.io/projected/ed1c57d6-3991-4ebb-9149-63e38e19dc40-kube-api-access-q8rds\") pod \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\" (UID: \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\") " Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.343519 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ed1c57d6-3991-4ebb-9149-63e38e19dc40-console-config\") pod \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\" (UID: \"ed1c57d6-3991-4ebb-9149-63e38e19dc40\") " Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.344548 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed1c57d6-3991-4ebb-9149-63e38e19dc40-service-ca" (OuterVolumeSpecName: "service-ca") pod "ed1c57d6-3991-4ebb-9149-63e38e19dc40" (UID: "ed1c57d6-3991-4ebb-9149-63e38e19dc40"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.344577 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed1c57d6-3991-4ebb-9149-63e38e19dc40-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ed1c57d6-3991-4ebb-9149-63e38e19dc40" (UID: "ed1c57d6-3991-4ebb-9149-63e38e19dc40"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.344642 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed1c57d6-3991-4ebb-9149-63e38e19dc40-console-config" (OuterVolumeSpecName: "console-config") pod "ed1c57d6-3991-4ebb-9149-63e38e19dc40" (UID: "ed1c57d6-3991-4ebb-9149-63e38e19dc40"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.345653 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed1c57d6-3991-4ebb-9149-63e38e19dc40-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ed1c57d6-3991-4ebb-9149-63e38e19dc40" (UID: "ed1c57d6-3991-4ebb-9149-63e38e19dc40"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.349580 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed1c57d6-3991-4ebb-9149-63e38e19dc40-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ed1c57d6-3991-4ebb-9149-63e38e19dc40" (UID: "ed1c57d6-3991-4ebb-9149-63e38e19dc40"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.349623 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed1c57d6-3991-4ebb-9149-63e38e19dc40-kube-api-access-q8rds" (OuterVolumeSpecName: "kube-api-access-q8rds") pod "ed1c57d6-3991-4ebb-9149-63e38e19dc40" (UID: "ed1c57d6-3991-4ebb-9149-63e38e19dc40"). InnerVolumeSpecName "kube-api-access-q8rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.349886 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed1c57d6-3991-4ebb-9149-63e38e19dc40-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ed1c57d6-3991-4ebb-9149-63e38e19dc40" (UID: "ed1c57d6-3991-4ebb-9149-63e38e19dc40"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.444466 4680 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ed1c57d6-3991-4ebb-9149-63e38e19dc40-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.444510 4680 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed1c57d6-3991-4ebb-9149-63e38e19dc40-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.444527 4680 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ed1c57d6-3991-4ebb-9149-63e38e19dc40-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.444538 4680 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ed1c57d6-3991-4ebb-9149-63e38e19dc40-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.444549 4680 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed1c57d6-3991-4ebb-9149-63e38e19dc40-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.444560 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8rds\" (UniqueName: \"kubernetes.io/projected/ed1c57d6-3991-4ebb-9149-63e38e19dc40-kube-api-access-q8rds\") on node \"crc\" DevicePath \"\"" Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.444572 4680 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ed1c57d6-3991-4ebb-9149-63e38e19dc40-console-config\") on node \"crc\" DevicePath \"\"" Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.891158 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-t9gdn_ed1c57d6-3991-4ebb-9149-63e38e19dc40/console/0.log" Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.891202 4680 generic.go:334] "Generic (PLEG): container finished" podID="ed1c57d6-3991-4ebb-9149-63e38e19dc40" containerID="4cc154261124020a4b7be820cd9620dbc115d3fcd61329eaab6df60d6da8920e" exitCode=2 Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.891235 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-t9gdn" event={"ID":"ed1c57d6-3991-4ebb-9149-63e38e19dc40","Type":"ContainerDied","Data":"4cc154261124020a4b7be820cd9620dbc115d3fcd61329eaab6df60d6da8920e"} Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.891254 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-t9gdn" Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.891272 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-t9gdn" event={"ID":"ed1c57d6-3991-4ebb-9149-63e38e19dc40","Type":"ContainerDied","Data":"1e73b22bbdc2da4dee523d1a74095171d078fc7e3bb3ae122f38bfd1a4e4ab90"} Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.891298 4680 scope.go:117] "RemoveContainer" containerID="4cc154261124020a4b7be820cd9620dbc115d3fcd61329eaab6df60d6da8920e" Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.894546 4680 generic.go:334] "Generic (PLEG): container finished" podID="9c7369ac-8367-419f-b50f-82d0ba1cef96" containerID="3f29c7e56e3c20146efdfac62bd53ed364614b1927bdd893e3906be761120261" exitCode=0 Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.894571 4680 generic.go:334] "Generic (PLEG): container finished" podID="9c7369ac-8367-419f-b50f-82d0ba1cef96" containerID="632d3d521f24863961cd3abb350d2195cd791ad86415717250fc785efcb180dd" exitCode=0 Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.894590 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk" event={"ID":"9c7369ac-8367-419f-b50f-82d0ba1cef96","Type":"ContainerDied","Data":"3f29c7e56e3c20146efdfac62bd53ed364614b1927bdd893e3906be761120261"} Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.894614 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk" event={"ID":"9c7369ac-8367-419f-b50f-82d0ba1cef96","Type":"ContainerDied","Data":"632d3d521f24863961cd3abb350d2195cd791ad86415717250fc785efcb180dd"} Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.913665 4680 scope.go:117] "RemoveContainer" containerID="4cc154261124020a4b7be820cd9620dbc115d3fcd61329eaab6df60d6da8920e" Feb 17 17:56:30 crc kubenswrapper[4680]: E0217 17:56:30.914639 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cc154261124020a4b7be820cd9620dbc115d3fcd61329eaab6df60d6da8920e\": container with ID starting with 4cc154261124020a4b7be820cd9620dbc115d3fcd61329eaab6df60d6da8920e not found: ID does not exist" containerID="4cc154261124020a4b7be820cd9620dbc115d3fcd61329eaab6df60d6da8920e" Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.914698 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cc154261124020a4b7be820cd9620dbc115d3fcd61329eaab6df60d6da8920e"} err="failed to get container status \"4cc154261124020a4b7be820cd9620dbc115d3fcd61329eaab6df60d6da8920e\": rpc error: code = NotFound desc = could not find container \"4cc154261124020a4b7be820cd9620dbc115d3fcd61329eaab6df60d6da8920e\": container with ID starting with 4cc154261124020a4b7be820cd9620dbc115d3fcd61329eaab6df60d6da8920e not found: ID does not exist" Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.923144 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-t9gdn"] Feb 17 17:56:30 crc kubenswrapper[4680]: I0217 17:56:30.927121 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-t9gdn"] Feb 17 17:56:31 crc kubenswrapper[4680]: I0217 17:56:31.112474 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed1c57d6-3991-4ebb-9149-63e38e19dc40" 
path="/var/lib/kubelet/pods/ed1c57d6-3991-4ebb-9149-63e38e19dc40/volumes" Feb 17 17:56:32 crc kubenswrapper[4680]: I0217 17:56:32.126406 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk" Feb 17 17:56:32 crc kubenswrapper[4680]: I0217 17:56:32.168307 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9c7369ac-8367-419f-b50f-82d0ba1cef96-bundle\") pod \"9c7369ac-8367-419f-b50f-82d0ba1cef96\" (UID: \"9c7369ac-8367-419f-b50f-82d0ba1cef96\") " Feb 17 17:56:32 crc kubenswrapper[4680]: I0217 17:56:32.168480 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9c7369ac-8367-419f-b50f-82d0ba1cef96-util\") pod \"9c7369ac-8367-419f-b50f-82d0ba1cef96\" (UID: \"9c7369ac-8367-419f-b50f-82d0ba1cef96\") " Feb 17 17:56:32 crc kubenswrapper[4680]: I0217 17:56:32.168591 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwjbv\" (UniqueName: \"kubernetes.io/projected/9c7369ac-8367-419f-b50f-82d0ba1cef96-kube-api-access-pwjbv\") pod \"9c7369ac-8367-419f-b50f-82d0ba1cef96\" (UID: \"9c7369ac-8367-419f-b50f-82d0ba1cef96\") " Feb 17 17:56:32 crc kubenswrapper[4680]: I0217 17:56:32.170338 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c7369ac-8367-419f-b50f-82d0ba1cef96-bundle" (OuterVolumeSpecName: "bundle") pod "9c7369ac-8367-419f-b50f-82d0ba1cef96" (UID: "9c7369ac-8367-419f-b50f-82d0ba1cef96"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:56:32 crc kubenswrapper[4680]: I0217 17:56:32.173818 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c7369ac-8367-419f-b50f-82d0ba1cef96-kube-api-access-pwjbv" (OuterVolumeSpecName: "kube-api-access-pwjbv") pod "9c7369ac-8367-419f-b50f-82d0ba1cef96" (UID: "9c7369ac-8367-419f-b50f-82d0ba1cef96"). InnerVolumeSpecName "kube-api-access-pwjbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:56:32 crc kubenswrapper[4680]: I0217 17:56:32.185538 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c7369ac-8367-419f-b50f-82d0ba1cef96-util" (OuterVolumeSpecName: "util") pod "9c7369ac-8367-419f-b50f-82d0ba1cef96" (UID: "9c7369ac-8367-419f-b50f-82d0ba1cef96"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:56:32 crc kubenswrapper[4680]: I0217 17:56:32.269770 4680 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9c7369ac-8367-419f-b50f-82d0ba1cef96-util\") on node \"crc\" DevicePath \"\"" Feb 17 17:56:32 crc kubenswrapper[4680]: I0217 17:56:32.269807 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwjbv\" (UniqueName: \"kubernetes.io/projected/9c7369ac-8367-419f-b50f-82d0ba1cef96-kube-api-access-pwjbv\") on node \"crc\" DevicePath \"\"" Feb 17 17:56:32 crc kubenswrapper[4680]: I0217 17:56:32.269819 4680 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9c7369ac-8367-419f-b50f-82d0ba1cef96-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 17:56:32 crc kubenswrapper[4680]: I0217 17:56:32.905311 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk" event={"ID":"9c7369ac-8367-419f-b50f-82d0ba1cef96","Type":"ContainerDied","Data":"e6b859569adb90f74a7838f0a3573b88a0da65a62d5f3e4ba4634c85063da0da"} Feb 17 17:56:32 crc kubenswrapper[4680]: I0217 17:56:32.905366 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6b859569adb90f74a7838f0a3573b88a0da65a62d5f3e4ba4634c85063da0da" Feb 17 17:56:32 crc kubenswrapper[4680]: I0217 17:56:32.905377 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk" Feb 17 17:56:35 crc kubenswrapper[4680]: I0217 17:56:35.588769 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:56:35 crc kubenswrapper[4680]: I0217 17:56:35.589158 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.079547 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-847c44f7f8-ct2vk"] Feb 17 17:56:42 crc kubenswrapper[4680]: E0217 17:56:42.080355 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed1c57d6-3991-4ebb-9149-63e38e19dc40" containerName="console" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.080370 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed1c57d6-3991-4ebb-9149-63e38e19dc40" containerName="console" Feb 17 17:56:42 crc kubenswrapper[4680]: E0217 17:56:42.080389 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c7369ac-8367-419f-b50f-82d0ba1cef96" containerName="util" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.080396 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c7369ac-8367-419f-b50f-82d0ba1cef96" containerName="util" Feb 17 17:56:42 crc kubenswrapper[4680]: E0217 17:56:42.080421 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c7369ac-8367-419f-b50f-82d0ba1cef96" containerName="pull" Feb 17 17:56:42 
crc kubenswrapper[4680]: I0217 17:56:42.080430 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c7369ac-8367-419f-b50f-82d0ba1cef96" containerName="pull" Feb 17 17:56:42 crc kubenswrapper[4680]: E0217 17:56:42.080448 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c7369ac-8367-419f-b50f-82d0ba1cef96" containerName="extract" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.080455 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c7369ac-8367-419f-b50f-82d0ba1cef96" containerName="extract" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.080570 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c7369ac-8367-419f-b50f-82d0ba1cef96" containerName="extract" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.080582 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed1c57d6-3991-4ebb-9149-63e38e19dc40" containerName="console" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.081027 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-847c44f7f8-ct2vk" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.084825 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.085053 4680 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.085210 4680 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-fnjng" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.085403 4680 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.086030 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.141396 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-847c44f7f8-ct2vk"] Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.235198 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bcb1b2bb-a7d3-41f4-a898-c563fceb659f-apiservice-cert\") pod \"metallb-operator-controller-manager-847c44f7f8-ct2vk\" (UID: \"bcb1b2bb-a7d3-41f4-a898-c563fceb659f\") " pod="metallb-system/metallb-operator-controller-manager-847c44f7f8-ct2vk" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.235276 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bcb1b2bb-a7d3-41f4-a898-c563fceb659f-webhook-cert\") pod \"metallb-operator-controller-manager-847c44f7f8-ct2vk\" (UID: \"bcb1b2bb-a7d3-41f4-a898-c563fceb659f\") " pod="metallb-system/metallb-operator-controller-manager-847c44f7f8-ct2vk" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.235382 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46cms\" (UniqueName: \"kubernetes.io/projected/bcb1b2bb-a7d3-41f4-a898-c563fceb659f-kube-api-access-46cms\") pod \"metallb-operator-controller-manager-847c44f7f8-ct2vk\" (UID: 
\"bcb1b2bb-a7d3-41f4-a898-c563fceb659f\") " pod="metallb-system/metallb-operator-controller-manager-847c44f7f8-ct2vk" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.336350 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bcb1b2bb-a7d3-41f4-a898-c563fceb659f-apiservice-cert\") pod \"metallb-operator-controller-manager-847c44f7f8-ct2vk\" (UID: \"bcb1b2bb-a7d3-41f4-a898-c563fceb659f\") " pod="metallb-system/metallb-operator-controller-manager-847c44f7f8-ct2vk" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.336426 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bcb1b2bb-a7d3-41f4-a898-c563fceb659f-webhook-cert\") pod \"metallb-operator-controller-manager-847c44f7f8-ct2vk\" (UID: \"bcb1b2bb-a7d3-41f4-a898-c563fceb659f\") " pod="metallb-system/metallb-operator-controller-manager-847c44f7f8-ct2vk" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.336475 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46cms\" (UniqueName: \"kubernetes.io/projected/bcb1b2bb-a7d3-41f4-a898-c563fceb659f-kube-api-access-46cms\") pod \"metallb-operator-controller-manager-847c44f7f8-ct2vk\" (UID: \"bcb1b2bb-a7d3-41f4-a898-c563fceb659f\") " pod="metallb-system/metallb-operator-controller-manager-847c44f7f8-ct2vk" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.343377 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bcb1b2bb-a7d3-41f4-a898-c563fceb659f-apiservice-cert\") pod \"metallb-operator-controller-manager-847c44f7f8-ct2vk\" (UID: \"bcb1b2bb-a7d3-41f4-a898-c563fceb659f\") " pod="metallb-system/metallb-operator-controller-manager-847c44f7f8-ct2vk" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.355102 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bcb1b2bb-a7d3-41f4-a898-c563fceb659f-webhook-cert\") pod \"metallb-operator-controller-manager-847c44f7f8-ct2vk\" (UID: \"bcb1b2bb-a7d3-41f4-a898-c563fceb659f\") " pod="metallb-system/metallb-operator-controller-manager-847c44f7f8-ct2vk" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.360003 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46cms\" (UniqueName: \"kubernetes.io/projected/bcb1b2bb-a7d3-41f4-a898-c563fceb659f-kube-api-access-46cms\") pod \"metallb-operator-controller-manager-847c44f7f8-ct2vk\" (UID: \"bcb1b2bb-a7d3-41f4-a898-c563fceb659f\") " pod="metallb-system/metallb-operator-controller-manager-847c44f7f8-ct2vk" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.397836 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-847c44f7f8-ct2vk" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.442244 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-c4d794488-wgkbr"] Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.443671 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-c4d794488-wgkbr" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.451297 4680 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-pxwpt" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.451592 4680 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.455110 4680 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.461041 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-c4d794488-wgkbr"] Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.539236 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58nmr\" (UniqueName: \"kubernetes.io/projected/07691e4d-6667-4947-a3e3-de4d4d403aa5-kube-api-access-58nmr\") pod \"metallb-operator-webhook-server-c4d794488-wgkbr\" (UID: \"07691e4d-6667-4947-a3e3-de4d4d403aa5\") " pod="metallb-system/metallb-operator-webhook-server-c4d794488-wgkbr" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.539332 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/07691e4d-6667-4947-a3e3-de4d4d403aa5-webhook-cert\") pod \"metallb-operator-webhook-server-c4d794488-wgkbr\" (UID: \"07691e4d-6667-4947-a3e3-de4d4d403aa5\") " pod="metallb-system/metallb-operator-webhook-server-c4d794488-wgkbr" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.539394 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/07691e4d-6667-4947-a3e3-de4d4d403aa5-apiservice-cert\") pod \"metallb-operator-webhook-server-c4d794488-wgkbr\" (UID: \"07691e4d-6667-4947-a3e3-de4d4d403aa5\") " pod="metallb-system/metallb-operator-webhook-server-c4d794488-wgkbr" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.640281 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58nmr\" (UniqueName: \"kubernetes.io/projected/07691e4d-6667-4947-a3e3-de4d4d403aa5-kube-api-access-58nmr\") pod \"metallb-operator-webhook-server-c4d794488-wgkbr\" (UID: \"07691e4d-6667-4947-a3e3-de4d4d403aa5\") " pod="metallb-system/metallb-operator-webhook-server-c4d794488-wgkbr" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.640382 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/07691e4d-6667-4947-a3e3-de4d4d403aa5-webhook-cert\") pod \"metallb-operator-webhook-server-c4d794488-wgkbr\" (UID: \"07691e4d-6667-4947-a3e3-de4d4d403aa5\") " pod="metallb-system/metallb-operator-webhook-server-c4d794488-wgkbr" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.640416 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/07691e4d-6667-4947-a3e3-de4d4d403aa5-apiservice-cert\") pod \"metallb-operator-webhook-server-c4d794488-wgkbr\" (UID: \"07691e4d-6667-4947-a3e3-de4d4d403aa5\") " pod="metallb-system/metallb-operator-webhook-server-c4d794488-wgkbr" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.647529 4680 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/07691e4d-6667-4947-a3e3-de4d4d403aa5-webhook-cert\") pod \"metallb-operator-webhook-server-c4d794488-wgkbr\" (UID: \"07691e4d-6667-4947-a3e3-de4d4d403aa5\") " pod="metallb-system/metallb-operator-webhook-server-c4d794488-wgkbr" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.659034 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/07691e4d-6667-4947-a3e3-de4d4d403aa5-apiservice-cert\") pod \"metallb-operator-webhook-server-c4d794488-wgkbr\" (UID: \"07691e4d-6667-4947-a3e3-de4d4d403aa5\") " pod="metallb-system/metallb-operator-webhook-server-c4d794488-wgkbr" Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.678099 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58nmr\" (UniqueName: \"kubernetes.io/projected/07691e4d-6667-4947-a3e3-de4d4d403aa5-kube-api-access-58nmr\") pod \"metallb-operator-webhook-server-c4d794488-wgkbr\" (UID: \"07691e4d-6667-4947-a3e3-de4d4d403aa5\") " pod="metallb-system/metallb-operator-webhook-server-c4d794488-wgkbr" Feb 17 17:56:42 crc kubenswrapper[4680]: W0217 17:56:42.767074 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcb1b2bb_a7d3_41f4_a898_c563fceb659f.slice/crio-0de8e26fdee812455a68868c03a8f450b4623e21cd7f900a5d8d6dd7c9487603 WatchSource:0}: Error finding container 0de8e26fdee812455a68868c03a8f450b4623e21cd7f900a5d8d6dd7c9487603: Status 404 returned error can't find the container with id 0de8e26fdee812455a68868c03a8f450b4623e21cd7f900a5d8d6dd7c9487603 Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.769607 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-847c44f7f8-ct2vk"] Feb 17 17:56:42 crc kubenswrapper[4680]: I0217 17:56:42.778852 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-c4d794488-wgkbr" Feb 17 17:56:43 crc kubenswrapper[4680]: I0217 17:56:43.074050 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-847c44f7f8-ct2vk" event={"ID":"bcb1b2bb-a7d3-41f4-a898-c563fceb659f","Type":"ContainerStarted","Data":"0de8e26fdee812455a68868c03a8f450b4623e21cd7f900a5d8d6dd7c9487603"} Feb 17 17:56:43 crc kubenswrapper[4680]: I0217 17:56:43.310902 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-c4d794488-wgkbr"] Feb 17 17:56:43 crc kubenswrapper[4680]: W0217 17:56:43.316786 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07691e4d_6667_4947_a3e3_de4d4d403aa5.slice/crio-317890a4bdfabc25943a96320b4e036111d43432a6f70945e24ada4740f019e1 WatchSource:0}: Error finding container 317890a4bdfabc25943a96320b4e036111d43432a6f70945e24ada4740f019e1: Status 404 returned error can't find the container with id 317890a4bdfabc25943a96320b4e036111d43432a6f70945e24ada4740f019e1 Feb 17 17:56:44 crc kubenswrapper[4680]: I0217 17:56:44.081718 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-c4d794488-wgkbr" event={"ID":"07691e4d-6667-4947-a3e3-de4d4d403aa5","Type":"ContainerStarted","Data":"317890a4bdfabc25943a96320b4e036111d43432a6f70945e24ada4740f019e1"} Feb 17 17:56:45 crc kubenswrapper[4680]: I0217 17:56:45.748287 4680 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 17:56:47 crc kubenswrapper[4680]: I0217 17:56:47.112148 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-847c44f7f8-ct2vk" event={"ID":"bcb1b2bb-a7d3-41f4-a898-c563fceb659f","Type":"ContainerStarted","Data":"31189fd6398ba003309781e4c2ea5440a16c433c608c1615bb64780a256dc34a"} Feb 17 17:56:47 crc kubenswrapper[4680]: I0217 17:56:47.112406 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-847c44f7f8-ct2vk" Feb 17 17:56:47 crc kubenswrapper[4680]: I0217 17:56:47.126969 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-847c44f7f8-ct2vk" podStartSLOduration=1.380125698 podStartE2EDuration="5.126949081s" podCreationTimestamp="2026-02-17 17:56:42 +0000 UTC" firstStartedPulling="2026-02-17 17:56:42.773518521 +0000 UTC m=+732.324417200" lastFinishedPulling="2026-02-17 17:56:46.520341894 +0000 UTC m=+736.071240583" observedRunningTime="2026-02-17 17:56:47.124637387 +0000 UTC m=+736.675536066" watchObservedRunningTime="2026-02-17 17:56:47.126949081 +0000 UTC m=+736.677847760" Feb 17 17:56:50 crc kubenswrapper[4680]: I0217 17:56:50.131687 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-c4d794488-wgkbr" event={"ID":"07691e4d-6667-4947-a3e3-de4d4d403aa5","Type":"ContainerStarted","Data":"d699d42bdb0eb1d0c9a2d3ea6b92a728ab68e6e2fbc80141fc8567af13ccd156"} Feb 17 17:56:50 crc kubenswrapper[4680]: I0217 17:56:50.132001 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-c4d794488-wgkbr" Feb 17 17:56:50 crc kubenswrapper[4680]: I0217 17:56:50.159404 4680 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="metallb-system/metallb-operator-webhook-server-c4d794488-wgkbr" podStartSLOduration=2.48063185 podStartE2EDuration="8.159371129s" podCreationTimestamp="2026-02-17 17:56:42 +0000 UTC" firstStartedPulling="2026-02-17 17:56:43.320228913 +0000 UTC m=+732.871127582" lastFinishedPulling="2026-02-17 17:56:48.998968192 +0000 UTC m=+738.549866861" observedRunningTime="2026-02-17 17:56:50.157640934 +0000 UTC m=+739.708539613" watchObservedRunningTime="2026-02-17 17:56:50.159371129 +0000 UTC m=+739.710269808" Feb 17 17:57:02 crc kubenswrapper[4680]: I0217 17:57:02.784026 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-c4d794488-wgkbr" Feb 17 17:57:05 crc kubenswrapper[4680]: I0217 17:57:05.589294 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:57:05 crc kubenswrapper[4680]: I0217 17:57:05.589435 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:57:05 crc kubenswrapper[4680]: I0217 17:57:05.589488 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" Feb 17 17:57:05 crc kubenswrapper[4680]: I0217 17:57:05.590131 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b2c9e5cbe14222efd0ffdc11f960ea7f538fdce4e27dc297816e521875a655d0"} pod="openshift-machine-config-operator/machine-config-daemon-rfw56" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:57:05 crc kubenswrapper[4680]: I0217 17:57:05.590192 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" containerID="cri-o://b2c9e5cbe14222efd0ffdc11f960ea7f538fdce4e27dc297816e521875a655d0" gracePeriod=600 Feb 17 17:57:06 crc kubenswrapper[4680]: I0217 17:57:06.209843 4680 generic.go:334] "Generic (PLEG): container finished" podID="8dd08058-016e-4607-8be6-40463674c2fe" containerID="b2c9e5cbe14222efd0ffdc11f960ea7f538fdce4e27dc297816e521875a655d0" exitCode=0 Feb 17 17:57:06 crc kubenswrapper[4680]: I0217 17:57:06.209917 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerDied","Data":"b2c9e5cbe14222efd0ffdc11f960ea7f538fdce4e27dc297816e521875a655d0"} Feb 17 17:57:06 crc kubenswrapper[4680]: I0217 17:57:06.210441 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerStarted","Data":"44b711f4401b0e5c2c2810ad8495da08e3e147502d358d88f7938a52612ea1ab"} Feb 17 17:57:06 crc kubenswrapper[4680]: I0217 17:57:06.210470 4680 scope.go:117] "RemoveContainer" 
containerID="4c4a97f59a07acf950ba570f64f5efedeae0f5884b8819e8f9957b63b9ada29d" Feb 17 17:57:22 crc kubenswrapper[4680]: I0217 17:57:22.400878 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-847c44f7f8-ct2vk" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.137060 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-ntt64"] Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.140074 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-ntt64" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.141066 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-w72g5"] Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.141896 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-w72g5" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.146860 4680 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.147211 4680 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.147307 4680 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-c6rh7" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.147577 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.165902 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd-frr-conf\") pod \"frr-k8s-ntt64\" (UID: \"509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd\") " pod="metallb-system/frr-k8s-ntt64" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.165958 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd-frr-startup\") pod \"frr-k8s-ntt64\" (UID: \"509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd\") " pod="metallb-system/frr-k8s-ntt64" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.165982 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd-frr-sockets\") pod \"frr-k8s-ntt64\" (UID: \"509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd\") " pod="metallb-system/frr-k8s-ntt64" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.166031 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd-metrics\") pod \"frr-k8s-ntt64\" (UID: \"509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd\") " pod="metallb-system/frr-k8s-ntt64" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.166076 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f56mn\" (UniqueName: \"kubernetes.io/projected/65241e32-d4f0-4771-b88c-97d3750130e9-kube-api-access-f56mn\") pod \"frr-k8s-webhook-server-78b44bf5bb-w72g5\" (UID: 
\"65241e32-d4f0-4771-b88c-97d3750130e9\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-w72g5" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.166103 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/65241e32-d4f0-4771-b88c-97d3750130e9-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-w72g5\" (UID: \"65241e32-d4f0-4771-b88c-97d3750130e9\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-w72g5" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.166164 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd-reloader\") pod \"frr-k8s-ntt64\" (UID: \"509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd\") " pod="metallb-system/frr-k8s-ntt64" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.166190 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd-metrics-certs\") pod \"frr-k8s-ntt64\" (UID: \"509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd\") " pod="metallb-system/frr-k8s-ntt64" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.166215 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82tnm\" (UniqueName: \"kubernetes.io/projected/509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd-kube-api-access-82tnm\") pod \"frr-k8s-ntt64\" (UID: \"509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd\") " pod="metallb-system/frr-k8s-ntt64" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.173454 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-w72g5"] Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.250115 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-wjjw2"] Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.251064 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-wjjw2" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.254443 4680 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.254622 4680 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.255095 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.261611 4680 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-fhtqz" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.267716 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd-metrics\") pod \"frr-k8s-ntt64\" (UID: \"509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd\") " pod="metallb-system/frr-k8s-ntt64" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.267783 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f56mn\" (UniqueName: \"kubernetes.io/projected/65241e32-d4f0-4771-b88c-97d3750130e9-kube-api-access-f56mn\") pod \"frr-k8s-webhook-server-78b44bf5bb-w72g5\" (UID: \"65241e32-d4f0-4771-b88c-97d3750130e9\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-w72g5" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.267819 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/65241e32-d4f0-4771-b88c-97d3750130e9-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-w72g5\" (UID: \"65241e32-d4f0-4771-b88c-97d3750130e9\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-w72g5" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.268078 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4bbcd402-b03a-45bc-b343-95f81671cb35-metrics-certs\") pod \"speaker-wjjw2\" (UID: \"4bbcd402-b03a-45bc-b343-95f81671cb35\") " pod="metallb-system/speaker-wjjw2" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.268113 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4bbcd402-b03a-45bc-b343-95f81671cb35-memberlist\") pod \"speaker-wjjw2\" (UID: \"4bbcd402-b03a-45bc-b343-95f81671cb35\") " pod="metallb-system/speaker-wjjw2" Feb 17 17:57:23 crc kubenswrapper[4680]: E0217 17:57:23.268033 4680 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.268180 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd-metrics\") pod \"frr-k8s-ntt64\" (UID: \"509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd\") " pod="metallb-system/frr-k8s-ntt64" Feb 17 17:57:23 crc kubenswrapper[4680]: E0217 17:57:23.268209 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65241e32-d4f0-4771-b88c-97d3750130e9-cert podName:65241e32-d4f0-4771-b88c-97d3750130e9 nodeName:}" failed. 
No retries permitted until 2026-02-17 17:57:23.768187654 +0000 UTC m=+773.319086333 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/65241e32-d4f0-4771-b88c-97d3750130e9-cert") pod "frr-k8s-webhook-server-78b44bf5bb-w72g5" (UID: "65241e32-d4f0-4771-b88c-97d3750130e9") : secret "frr-k8s-webhook-server-cert" not found Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.268138 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd-reloader\") pod \"frr-k8s-ntt64\" (UID: \"509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd\") " pod="metallb-system/frr-k8s-ntt64" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.268272 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd-metrics-certs\") pod \"frr-k8s-ntt64\" (UID: \"509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd\") " pod="metallb-system/frr-k8s-ntt64" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.268304 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82tnm\" (UniqueName: \"kubernetes.io/projected/509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd-kube-api-access-82tnm\") pod \"frr-k8s-ntt64\" (UID: \"509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd\") " pod="metallb-system/frr-k8s-ntt64" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.268362 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd-frr-conf\") pod \"frr-k8s-ntt64\" (UID: \"509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd\") " pod="metallb-system/frr-k8s-ntt64" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.268369 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd-reloader\") pod \"frr-k8s-ntt64\" (UID: \"509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd\") " pod="metallb-system/frr-k8s-ntt64" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.268391 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9v5r\" (UniqueName: \"kubernetes.io/projected/4bbcd402-b03a-45bc-b343-95f81671cb35-kube-api-access-j9v5r\") pod \"speaker-wjjw2\" (UID: \"4bbcd402-b03a-45bc-b343-95f81671cb35\") " pod="metallb-system/speaker-wjjw2" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.268416 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4bbcd402-b03a-45bc-b343-95f81671cb35-metallb-excludel2\") pod \"speaker-wjjw2\" (UID: \"4bbcd402-b03a-45bc-b343-95f81671cb35\") " pod="metallb-system/speaker-wjjw2" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.268443 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd-frr-startup\") pod \"frr-k8s-ntt64\" (UID: \"509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd\") " pod="metallb-system/frr-k8s-ntt64" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.268484 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: 
\"kubernetes.io/empty-dir/509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd-frr-sockets\") pod \"frr-k8s-ntt64\" (UID: \"509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd\") " pod="metallb-system/frr-k8s-ntt64" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.268934 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd-frr-sockets\") pod \"frr-k8s-ntt64\" (UID: \"509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd\") " pod="metallb-system/frr-k8s-ntt64" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.269097 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd-frr-conf\") pod \"frr-k8s-ntt64\" (UID: \"509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd\") " pod="metallb-system/frr-k8s-ntt64" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.269512 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd-frr-startup\") pod \"frr-k8s-ntt64\" (UID: \"509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd\") " pod="metallb-system/frr-k8s-ntt64" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.273975 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd-metrics-certs\") pod \"frr-k8s-ntt64\" (UID: \"509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd\") " pod="metallb-system/frr-k8s-ntt64" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.295268 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f56mn\" (UniqueName: \"kubernetes.io/projected/65241e32-d4f0-4771-b88c-97d3750130e9-kube-api-access-f56mn\") pod \"frr-k8s-webhook-server-78b44bf5bb-w72g5\" (UID: \"65241e32-d4f0-4771-b88c-97d3750130e9\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-w72g5" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.299584 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82tnm\" (UniqueName: \"kubernetes.io/projected/509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd-kube-api-access-82tnm\") pod \"frr-k8s-ntt64\" (UID: \"509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd\") " pod="metallb-system/frr-k8s-ntt64" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.324528 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-fnhv4"] Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.325395 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-fnhv4" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.329848 4680 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.344140 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-fnhv4"] Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.369090 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4bbcd402-b03a-45bc-b343-95f81671cb35-metrics-certs\") pod \"speaker-wjjw2\" (UID: \"4bbcd402-b03a-45bc-b343-95f81671cb35\") " pod="metallb-system/speaker-wjjw2" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.369451 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxclz\" (UniqueName: \"kubernetes.io/projected/c0f03e41-ea1c-4120-89b6-13445eeb9fd6-kube-api-access-dxclz\") pod \"controller-69bbfbf88f-fnhv4\" (UID: \"c0f03e41-ea1c-4120-89b6-13445eeb9fd6\") " pod="metallb-system/controller-69bbfbf88f-fnhv4" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.369575 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4bbcd402-b03a-45bc-b343-95f81671cb35-memberlist\") pod \"speaker-wjjw2\" (UID: \"4bbcd402-b03a-45bc-b343-95f81671cb35\") " pod="metallb-system/speaker-wjjw2" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.369684 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c0f03e41-ea1c-4120-89b6-13445eeb9fd6-metrics-certs\") pod \"controller-69bbfbf88f-fnhv4\" (UID: \"c0f03e41-ea1c-4120-89b6-13445eeb9fd6\") " pod="metallb-system/controller-69bbfbf88f-fnhv4" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.369885 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0f03e41-ea1c-4120-89b6-13445eeb9fd6-cert\") pod \"controller-69bbfbf88f-fnhv4\" (UID: \"c0f03e41-ea1c-4120-89b6-13445eeb9fd6\") " pod="metallb-system/controller-69bbfbf88f-fnhv4" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.369986 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9v5r\" (UniqueName: \"kubernetes.io/projected/4bbcd402-b03a-45bc-b343-95f81671cb35-kube-api-access-j9v5r\") pod \"speaker-wjjw2\" (UID: \"4bbcd402-b03a-45bc-b343-95f81671cb35\") " pod="metallb-system/speaker-wjjw2" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.370103 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4bbcd402-b03a-45bc-b343-95f81671cb35-metallb-excludel2\") pod \"speaker-wjjw2\" (UID: \"4bbcd402-b03a-45bc-b343-95f81671cb35\") " pod="metallb-system/speaker-wjjw2" Feb 17 17:57:23 crc kubenswrapper[4680]: E0217 17:57:23.369415 4680 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Feb 17 17:57:23 crc kubenswrapper[4680]: E0217 17:57:23.371060 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bbcd402-b03a-45bc-b343-95f81671cb35-metrics-certs podName:4bbcd402-b03a-45bc-b343-95f81671cb35 nodeName:}" failed. 
No retries permitted until 2026-02-17 17:57:23.871046152 +0000 UTC m=+773.421944831 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4bbcd402-b03a-45bc-b343-95f81671cb35-metrics-certs") pod "speaker-wjjw2" (UID: "4bbcd402-b03a-45bc-b343-95f81671cb35") : secret "speaker-certs-secret" not found Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.370858 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4bbcd402-b03a-45bc-b343-95f81671cb35-metallb-excludel2\") pod \"speaker-wjjw2\" (UID: \"4bbcd402-b03a-45bc-b343-95f81671cb35\") " pod="metallb-system/speaker-wjjw2" Feb 17 17:57:23 crc kubenswrapper[4680]: E0217 17:57:23.369756 4680 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 17 17:57:23 crc kubenswrapper[4680]: E0217 17:57:23.371567 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bbcd402-b03a-45bc-b343-95f81671cb35-memberlist podName:4bbcd402-b03a-45bc-b343-95f81671cb35 nodeName:}" failed. No retries permitted until 2026-02-17 17:57:23.871555619 +0000 UTC m=+773.422454298 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4bbcd402-b03a-45bc-b343-95f81671cb35-memberlist") pod "speaker-wjjw2" (UID: "4bbcd402-b03a-45bc-b343-95f81671cb35") : secret "metallb-memberlist" not found Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.384780 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9v5r\" (UniqueName: \"kubernetes.io/projected/4bbcd402-b03a-45bc-b343-95f81671cb35-kube-api-access-j9v5r\") pod \"speaker-wjjw2\" (UID: \"4bbcd402-b03a-45bc-b343-95f81671cb35\") " pod="metallb-system/speaker-wjjw2" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.459589 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-ntt64" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.472654 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxclz\" (UniqueName: \"kubernetes.io/projected/c0f03e41-ea1c-4120-89b6-13445eeb9fd6-kube-api-access-dxclz\") pod \"controller-69bbfbf88f-fnhv4\" (UID: \"c0f03e41-ea1c-4120-89b6-13445eeb9fd6\") " pod="metallb-system/controller-69bbfbf88f-fnhv4" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.472741 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c0f03e41-ea1c-4120-89b6-13445eeb9fd6-metrics-certs\") pod \"controller-69bbfbf88f-fnhv4\" (UID: \"c0f03e41-ea1c-4120-89b6-13445eeb9fd6\") " pod="metallb-system/controller-69bbfbf88f-fnhv4" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.472834 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0f03e41-ea1c-4120-89b6-13445eeb9fd6-cert\") pod \"controller-69bbfbf88f-fnhv4\" (UID: \"c0f03e41-ea1c-4120-89b6-13445eeb9fd6\") " pod="metallb-system/controller-69bbfbf88f-fnhv4" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.480166 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c0f03e41-ea1c-4120-89b6-13445eeb9fd6-metrics-certs\") pod \"controller-69bbfbf88f-fnhv4\" (UID: \"c0f03e41-ea1c-4120-89b6-13445eeb9fd6\") " pod="metallb-system/controller-69bbfbf88f-fnhv4" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.484513 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0f03e41-ea1c-4120-89b6-13445eeb9fd6-cert\") pod \"controller-69bbfbf88f-fnhv4\" (UID: \"c0f03e41-ea1c-4120-89b6-13445eeb9fd6\") " pod="metallb-system/controller-69bbfbf88f-fnhv4" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.518697 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxclz\" (UniqueName: \"kubernetes.io/projected/c0f03e41-ea1c-4120-89b6-13445eeb9fd6-kube-api-access-dxclz\") pod \"controller-69bbfbf88f-fnhv4\" (UID: \"c0f03e41-ea1c-4120-89b6-13445eeb9fd6\") " pod="metallb-system/controller-69bbfbf88f-fnhv4" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.647773 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-fnhv4" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.776084 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/65241e32-d4f0-4771-b88c-97d3750130e9-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-w72g5\" (UID: \"65241e32-d4f0-4771-b88c-97d3750130e9\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-w72g5" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.782608 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/65241e32-d4f0-4771-b88c-97d3750130e9-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-w72g5\" (UID: \"65241e32-d4f0-4771-b88c-97d3750130e9\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-w72g5" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.859126 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-fnhv4"] Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.877175 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4bbcd402-b03a-45bc-b343-95f81671cb35-metrics-certs\") pod \"speaker-wjjw2\" (UID: \"4bbcd402-b03a-45bc-b343-95f81671cb35\") " pod="metallb-system/speaker-wjjw2" Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.877373 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4bbcd402-b03a-45bc-b343-95f81671cb35-memberlist\") pod \"speaker-wjjw2\" (UID: \"4bbcd402-b03a-45bc-b343-95f81671cb35\") " pod="metallb-system/speaker-wjjw2" Feb 17 17:57:23 crc kubenswrapper[4680]: E0217 17:57:23.877601 4680 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 17 17:57:23 crc kubenswrapper[4680]: E0217 17:57:23.877687 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bbcd402-b03a-45bc-b343-95f81671cb35-memberlist podName:4bbcd402-b03a-45bc-b343-95f81671cb35 nodeName:}" failed. No retries permitted until 2026-02-17 17:57:24.877665082 +0000 UTC m=+774.428563761 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4bbcd402-b03a-45bc-b343-95f81671cb35-memberlist") pod "speaker-wjjw2" (UID: "4bbcd402-b03a-45bc-b343-95f81671cb35") : secret "metallb-memberlist" not found Feb 17 17:57:23 crc kubenswrapper[4680]: I0217 17:57:23.882068 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4bbcd402-b03a-45bc-b343-95f81671cb35-metrics-certs\") pod \"speaker-wjjw2\" (UID: \"4bbcd402-b03a-45bc-b343-95f81671cb35\") " pod="metallb-system/speaker-wjjw2" Feb 17 17:57:24 crc kubenswrapper[4680]: I0217 17:57:24.068795 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-w72g5" Feb 17 17:57:24 crc kubenswrapper[4680]: I0217 17:57:24.273691 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-w72g5"] Feb 17 17:57:24 crc kubenswrapper[4680]: W0217 17:57:24.284116 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65241e32_d4f0_4771_b88c_97d3750130e9.slice/crio-3ef1eec2e1b41a6dd0d5eaf98767cee643bca505fde4c2f430c7093683cb0b33 WatchSource:0}: Error finding container 3ef1eec2e1b41a6dd0d5eaf98767cee643bca505fde4c2f430c7093683cb0b33: Status 404 returned error can't find the container with id 3ef1eec2e1b41a6dd0d5eaf98767cee643bca505fde4c2f430c7093683cb0b33 Feb 17 17:57:24 crc kubenswrapper[4680]: I0217 17:57:24.309149 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ntt64" event={"ID":"509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd","Type":"ContainerStarted","Data":"2887ea20a43fe550e10f4119be50093e582638740bedfa59e29d666c4f0881f8"} Feb 17 17:57:24 crc kubenswrapper[4680]: I0217 17:57:24.311345 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-fnhv4" event={"ID":"c0f03e41-ea1c-4120-89b6-13445eeb9fd6","Type":"ContainerStarted","Data":"ca4dd561e9061279fa18abde2517c2c7b93cd20ebea3f7fdc6a82c76d1e88517"} Feb 17 17:57:24 crc kubenswrapper[4680]: I0217 17:57:24.311373 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-fnhv4" event={"ID":"c0f03e41-ea1c-4120-89b6-13445eeb9fd6","Type":"ContainerStarted","Data":"be161e3bee3e5f2b8f6eb4adcd979ef6d96f8d0412ee6264570f485abd2dd477"} Feb 17 17:57:24 crc kubenswrapper[4680]: I0217 17:57:24.311381 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-fnhv4" event={"ID":"c0f03e41-ea1c-4120-89b6-13445eeb9fd6","Type":"ContainerStarted","Data":"866cd6636c093c8895153f4f4ef7fcc7ca162b7407930cd02c7e8ab496baecea"} Feb 17 17:57:24 crc kubenswrapper[4680]: I0217 17:57:24.312101 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-fnhv4" Feb 17 17:57:24 crc kubenswrapper[4680]: I0217 17:57:24.313849 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-w72g5" event={"ID":"65241e32-d4f0-4771-b88c-97d3750130e9","Type":"ContainerStarted","Data":"3ef1eec2e1b41a6dd0d5eaf98767cee643bca505fde4c2f430c7093683cb0b33"} Feb 17 17:57:24 crc kubenswrapper[4680]: I0217 17:57:24.328604 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-fnhv4" podStartSLOduration=1.328588962 podStartE2EDuration="1.328588962s" podCreationTimestamp="2026-02-17 17:57:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:57:24.324762509 +0000 UTC m=+773.875661188" watchObservedRunningTime="2026-02-17 17:57:24.328588962 +0000 UTC m=+773.879487641" Feb 17 17:57:24 crc kubenswrapper[4680]: I0217 17:57:24.887740 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4bbcd402-b03a-45bc-b343-95f81671cb35-memberlist\") pod \"speaker-wjjw2\" (UID: \"4bbcd402-b03a-45bc-b343-95f81671cb35\") " pod="metallb-system/speaker-wjjw2" Feb 17 17:57:24 crc kubenswrapper[4680]: I0217 
17:57:24.896520 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4bbcd402-b03a-45bc-b343-95f81671cb35-memberlist\") pod \"speaker-wjjw2\" (UID: \"4bbcd402-b03a-45bc-b343-95f81671cb35\") " pod="metallb-system/speaker-wjjw2" Feb 17 17:57:25 crc kubenswrapper[4680]: I0217 17:57:25.064059 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-wjjw2" Feb 17 17:57:25 crc kubenswrapper[4680]: I0217 17:57:25.324532 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wjjw2" event={"ID":"4bbcd402-b03a-45bc-b343-95f81671cb35","Type":"ContainerStarted","Data":"6d800232685033cff5a2a7d8c1f2767b54cfba692af436ece80d1e1e24123661"} Feb 17 17:57:26 crc kubenswrapper[4680]: I0217 17:57:26.336651 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wjjw2" event={"ID":"4bbcd402-b03a-45bc-b343-95f81671cb35","Type":"ContainerStarted","Data":"7585f8a2069ab63d5b018cff81cc7f7ef2b0146d062dbd820fa79f27e23f1532"} Feb 17 17:57:26 crc kubenswrapper[4680]: I0217 17:57:26.336960 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wjjw2" event={"ID":"4bbcd402-b03a-45bc-b343-95f81671cb35","Type":"ContainerStarted","Data":"156592af7ca75eee8a67e730627fcd89843d305c7046315e9e68e3256c1f9f61"} Feb 17 17:57:26 crc kubenswrapper[4680]: I0217 17:57:26.336986 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-wjjw2" Feb 17 17:57:26 crc kubenswrapper[4680]: I0217 17:57:26.371768 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-wjjw2" podStartSLOduration=3.371751726 podStartE2EDuration="3.371751726s" podCreationTimestamp="2026-02-17 17:57:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:57:26.369536565 +0000 UTC m=+775.920435244" watchObservedRunningTime="2026-02-17 17:57:26.371751726 +0000 UTC m=+775.922650405" Feb 17 17:57:32 crc kubenswrapper[4680]: I0217 17:57:32.386190 4680 generic.go:334] "Generic (PLEG): container finished" podID="509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd" containerID="e58f4c56c30166a5fe7296373adc17c1a7b5f699ee7354c631cdc6277cc66153" exitCode=0 Feb 17 17:57:32 crc kubenswrapper[4680]: I0217 17:57:32.386307 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ntt64" event={"ID":"509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd","Type":"ContainerDied","Data":"e58f4c56c30166a5fe7296373adc17c1a7b5f699ee7354c631cdc6277cc66153"} Feb 17 17:57:32 crc kubenswrapper[4680]: I0217 17:57:32.388559 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-w72g5" event={"ID":"65241e32-d4f0-4771-b88c-97d3750130e9","Type":"ContainerStarted","Data":"36f131a79ceac89fca0aad0239c09dd4fcb152913ffd736a49e08ae2be3f6ee7"} Feb 17 17:57:32 crc kubenswrapper[4680]: I0217 17:57:32.388666 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-w72g5" Feb 17 17:57:33 crc kubenswrapper[4680]: I0217 17:57:33.394921 4680 generic.go:334] "Generic (PLEG): container finished" podID="509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd" containerID="a13021ee59b03d5e3472b9c590d6b58047f0718c0ca1870e1a525083d9a2f99b" exitCode=0 Feb 17 17:57:33 crc kubenswrapper[4680]: I0217 17:57:33.395038 4680 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="metallb-system/frr-k8s-ntt64" event={"ID":"509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd","Type":"ContainerDied","Data":"a13021ee59b03d5e3472b9c590d6b58047f0718c0ca1870e1a525083d9a2f99b"} Feb 17 17:57:33 crc kubenswrapper[4680]: I0217 17:57:33.420949 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-w72g5" podStartSLOduration=2.597911936 podStartE2EDuration="10.420927817s" podCreationTimestamp="2026-02-17 17:57:23 +0000 UTC" firstStartedPulling="2026-02-17 17:57:24.28723908 +0000 UTC m=+773.838137759" lastFinishedPulling="2026-02-17 17:57:32.110254961 +0000 UTC m=+781.661153640" observedRunningTime="2026-02-17 17:57:32.429249595 +0000 UTC m=+781.980148274" watchObservedRunningTime="2026-02-17 17:57:33.420927817 +0000 UTC m=+782.971826496" Feb 17 17:57:34 crc kubenswrapper[4680]: I0217 17:57:34.404010 4680 generic.go:334] "Generic (PLEG): container finished" podID="509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd" containerID="284c318f4a725fe235f614b74d6477e6c1fc1467db37883bd8ec5ea03b284dc0" exitCode=0 Feb 17 17:57:34 crc kubenswrapper[4680]: I0217 17:57:34.404057 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ntt64" event={"ID":"509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd","Type":"ContainerDied","Data":"284c318f4a725fe235f614b74d6477e6c1fc1467db37883bd8ec5ea03b284dc0"} Feb 17 17:57:35 crc kubenswrapper[4680]: I0217 17:57:35.067832 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-wjjw2" Feb 17 17:57:35 crc kubenswrapper[4680]: I0217 17:57:35.414752 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ntt64" event={"ID":"509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd","Type":"ContainerStarted","Data":"25652daf7ee80d6b287dc7a7f9b5aba1b0e1303df71727e746bc31b9bd60535e"} Feb 17 17:57:35 crc kubenswrapper[4680]: I0217 17:57:35.414792 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ntt64" event={"ID":"509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd","Type":"ContainerStarted","Data":"1c451765b73da40ae4e43ac69875d6cdcee363ec08597fd5adbec34d2ec61695"} Feb 17 17:57:35 crc kubenswrapper[4680]: I0217 17:57:35.414802 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ntt64" event={"ID":"509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd","Type":"ContainerStarted","Data":"922cc179fbbfd74a6a05dfff013fd6f1db33d54eff30f9134b1bf1e941fa27b2"} Feb 17 17:57:35 crc kubenswrapper[4680]: I0217 17:57:35.414811 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ntt64" event={"ID":"509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd","Type":"ContainerStarted","Data":"25050a0af24854841cc275f807bf46d212f082ba82f96df30dc9983f82c0caf5"} Feb 17 17:57:35 crc kubenswrapper[4680]: I0217 17:57:35.414820 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ntt64" event={"ID":"509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd","Type":"ContainerStarted","Data":"c9ee222ddf5844ad847b8b472892698459511090f520a159be2c219d15db716e"} Feb 17 17:57:36 crc kubenswrapper[4680]: I0217 17:57:36.423303 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ntt64" event={"ID":"509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd","Type":"ContainerStarted","Data":"4dce1b9096621ea973c0ef34ab20350300ae443a2cbf24f243e5940b88098d34"} Feb 17 17:57:36 crc kubenswrapper[4680]: I0217 17:57:36.423625 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/frr-k8s-ntt64" Feb 17 17:57:36 crc kubenswrapper[4680]: I0217 17:57:36.443745 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-ntt64" podStartSLOduration=4.944895941 podStartE2EDuration="13.443725151s" podCreationTimestamp="2026-02-17 17:57:23 +0000 UTC" firstStartedPulling="2026-02-17 17:57:23.589607391 +0000 UTC m=+773.140506070" lastFinishedPulling="2026-02-17 17:57:32.088436601 +0000 UTC m=+781.639335280" observedRunningTime="2026-02-17 17:57:36.440394666 +0000 UTC m=+785.991293375" watchObservedRunningTime="2026-02-17 17:57:36.443725151 +0000 UTC m=+785.994623830" Feb 17 17:57:37 crc kubenswrapper[4680]: I0217 17:57:37.697441 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-jl7pj"] Feb 17 17:57:37 crc kubenswrapper[4680]: I0217 17:57:37.698811 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-jl7pj" Feb 17 17:57:37 crc kubenswrapper[4680]: I0217 17:57:37.702553 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 17 17:57:37 crc kubenswrapper[4680]: I0217 17:57:37.703041 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-q9jjx" Feb 17 17:57:37 crc kubenswrapper[4680]: I0217 17:57:37.706614 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-jl7pj"] Feb 17 17:57:37 crc kubenswrapper[4680]: I0217 17:57:37.707744 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 17 17:57:37 crc kubenswrapper[4680]: I0217 17:57:37.861845 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgw29\" (UniqueName: \"kubernetes.io/projected/726358ca-de4d-4558-a233-7456d8a51276-kube-api-access-rgw29\") pod \"openstack-operator-index-jl7pj\" (UID: \"726358ca-de4d-4558-a233-7456d8a51276\") " pod="openstack-operators/openstack-operator-index-jl7pj" Feb 17 17:57:37 crc kubenswrapper[4680]: I0217 17:57:37.963458 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgw29\" (UniqueName: \"kubernetes.io/projected/726358ca-de4d-4558-a233-7456d8a51276-kube-api-access-rgw29\") pod \"openstack-operator-index-jl7pj\" (UID: \"726358ca-de4d-4558-a233-7456d8a51276\") " pod="openstack-operators/openstack-operator-index-jl7pj" Feb 17 17:57:37 crc kubenswrapper[4680]: I0217 17:57:37.995121 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgw29\" (UniqueName: \"kubernetes.io/projected/726358ca-de4d-4558-a233-7456d8a51276-kube-api-access-rgw29\") pod \"openstack-operator-index-jl7pj\" (UID: \"726358ca-de4d-4558-a233-7456d8a51276\") " pod="openstack-operators/openstack-operator-index-jl7pj" Feb 17 17:57:38 crc kubenswrapper[4680]: I0217 17:57:38.018822 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-jl7pj" Feb 17 17:57:38 crc kubenswrapper[4680]: I0217 17:57:38.460282 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-ntt64" Feb 17 17:57:38 crc kubenswrapper[4680]: I0217 17:57:38.478646 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-jl7pj"] Feb 17 17:57:38 crc kubenswrapper[4680]: I0217 17:57:38.512678 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-ntt64" Feb 17 17:57:39 crc kubenswrapper[4680]: I0217 17:57:39.472590 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jl7pj" event={"ID":"726358ca-de4d-4558-a233-7456d8a51276","Type":"ContainerStarted","Data":"be510225685380745f464a4b920263ded63723db84be5686f5b7da7e1fddc5c2"} Feb 17 17:57:40 crc kubenswrapper[4680]: I0217 17:57:40.277700 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-jl7pj"] Feb 17 17:57:40 crc kubenswrapper[4680]: I0217 17:57:40.884814 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-cmvdn"] Feb 17 17:57:40 crc kubenswrapper[4680]: I0217 17:57:40.885710 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cmvdn" Feb 17 17:57:40 crc kubenswrapper[4680]: I0217 17:57:40.896943 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cmvdn"] Feb 17 17:57:41 crc kubenswrapper[4680]: I0217 17:57:41.010788 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qtzl\" (UniqueName: \"kubernetes.io/projected/33ad8a07-6bf1-4c03-a44f-5462b01d5a82-kube-api-access-9qtzl\") pod \"openstack-operator-index-cmvdn\" (UID: \"33ad8a07-6bf1-4c03-a44f-5462b01d5a82\") " pod="openstack-operators/openstack-operator-index-cmvdn" Feb 17 17:57:41 crc kubenswrapper[4680]: I0217 17:57:41.112597 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qtzl\" (UniqueName: \"kubernetes.io/projected/33ad8a07-6bf1-4c03-a44f-5462b01d5a82-kube-api-access-9qtzl\") pod \"openstack-operator-index-cmvdn\" (UID: \"33ad8a07-6bf1-4c03-a44f-5462b01d5a82\") " pod="openstack-operators/openstack-operator-index-cmvdn" Feb 17 17:57:41 crc kubenswrapper[4680]: I0217 17:57:41.140010 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qtzl\" (UniqueName: \"kubernetes.io/projected/33ad8a07-6bf1-4c03-a44f-5462b01d5a82-kube-api-access-9qtzl\") pod \"openstack-operator-index-cmvdn\" (UID: \"33ad8a07-6bf1-4c03-a44f-5462b01d5a82\") " pod="openstack-operators/openstack-operator-index-cmvdn" Feb 17 17:57:41 crc kubenswrapper[4680]: I0217 17:57:41.213501 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-cmvdn" Feb 17 17:57:41 crc kubenswrapper[4680]: I0217 17:57:41.488897 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jl7pj" event={"ID":"726358ca-de4d-4558-a233-7456d8a51276","Type":"ContainerStarted","Data":"a614e008c5e5315902f7bf974d89e62ac73fef91ddbfa54f05c7f2f1e054eff9"} Feb 17 17:57:41 crc kubenswrapper[4680]: I0217 17:57:41.489108 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-jl7pj" podUID="726358ca-de4d-4558-a233-7456d8a51276" containerName="registry-server" containerID="cri-o://a614e008c5e5315902f7bf974d89e62ac73fef91ddbfa54f05c7f2f1e054eff9" gracePeriod=2 Feb 17 17:57:41 crc kubenswrapper[4680]: I0217 17:57:41.508425 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-jl7pj" podStartSLOduration=2.124874765 podStartE2EDuration="4.50838752s" podCreationTimestamp="2026-02-17 17:57:37 +0000 UTC" firstStartedPulling="2026-02-17 17:57:38.479350248 +0000 UTC m=+788.030248927" lastFinishedPulling="2026-02-17 17:57:40.862863003 +0000 UTC m=+790.413761682" observedRunningTime="2026-02-17 17:57:41.504427616 +0000 UTC m=+791.055326295" watchObservedRunningTime="2026-02-17 17:57:41.50838752 +0000 UTC m=+791.059286209" Feb 17 17:57:41 crc kubenswrapper[4680]: I0217 17:57:41.602977 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cmvdn"] Feb 17 17:57:41 crc kubenswrapper[4680]: W0217 17:57:41.613169 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33ad8a07_6bf1_4c03_a44f_5462b01d5a82.slice/crio-41b4729848715f5e717c3965aaf3131269f515db1b0447a5348a6e6ff88472d1 WatchSource:0}: Error finding container 41b4729848715f5e717c3965aaf3131269f515db1b0447a5348a6e6ff88472d1: Status 404 returned error can't find the container with id 41b4729848715f5e717c3965aaf3131269f515db1b0447a5348a6e6ff88472d1 Feb 17 17:57:41 crc kubenswrapper[4680]: I0217 17:57:41.878359 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-jl7pj" Feb 17 17:57:42 crc kubenswrapper[4680]: I0217 17:57:42.037114 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgw29\" (UniqueName: \"kubernetes.io/projected/726358ca-de4d-4558-a233-7456d8a51276-kube-api-access-rgw29\") pod \"726358ca-de4d-4558-a233-7456d8a51276\" (UID: \"726358ca-de4d-4558-a233-7456d8a51276\") " Feb 17 17:57:42 crc kubenswrapper[4680]: I0217 17:57:42.042048 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/726358ca-de4d-4558-a233-7456d8a51276-kube-api-access-rgw29" (OuterVolumeSpecName: "kube-api-access-rgw29") pod "726358ca-de4d-4558-a233-7456d8a51276" (UID: "726358ca-de4d-4558-a233-7456d8a51276"). InnerVolumeSpecName "kube-api-access-rgw29". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:57:42 crc kubenswrapper[4680]: I0217 17:57:42.138998 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgw29\" (UniqueName: \"kubernetes.io/projected/726358ca-de4d-4558-a233-7456d8a51276-kube-api-access-rgw29\") on node \"crc\" DevicePath \"\"" Feb 17 17:57:42 crc kubenswrapper[4680]: I0217 17:57:42.495484 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cmvdn" event={"ID":"33ad8a07-6bf1-4c03-a44f-5462b01d5a82","Type":"ContainerStarted","Data":"e7db6963f9c6408a910a484784941f92bd557d18597ee8e67ca97cd748cb1512"} Feb 17 17:57:42 crc kubenswrapper[4680]: I0217 17:57:42.495530 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cmvdn" event={"ID":"33ad8a07-6bf1-4c03-a44f-5462b01d5a82","Type":"ContainerStarted","Data":"41b4729848715f5e717c3965aaf3131269f515db1b0447a5348a6e6ff88472d1"} Feb 17 17:57:42 crc kubenswrapper[4680]: I0217 17:57:42.497110 4680 generic.go:334] "Generic (PLEG): container finished" podID="726358ca-de4d-4558-a233-7456d8a51276" containerID="a614e008c5e5315902f7bf974d89e62ac73fef91ddbfa54f05c7f2f1e054eff9" exitCode=0 Feb 17 17:57:42 crc kubenswrapper[4680]: I0217 17:57:42.497170 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jl7pj" event={"ID":"726358ca-de4d-4558-a233-7456d8a51276","Type":"ContainerDied","Data":"a614e008c5e5315902f7bf974d89e62ac73fef91ddbfa54f05c7f2f1e054eff9"} Feb 17 17:57:42 crc kubenswrapper[4680]: I0217 17:57:42.497204 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jl7pj" event={"ID":"726358ca-de4d-4558-a233-7456d8a51276","Type":"ContainerDied","Data":"be510225685380745f464a4b920263ded63723db84be5686f5b7da7e1fddc5c2"} Feb 17 17:57:42 crc kubenswrapper[4680]: I0217 17:57:42.497212 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-jl7pj" Feb 17 17:57:42 crc kubenswrapper[4680]: I0217 17:57:42.497225 4680 scope.go:117] "RemoveContainer" containerID="a614e008c5e5315902f7bf974d89e62ac73fef91ddbfa54f05c7f2f1e054eff9" Feb 17 17:57:42 crc kubenswrapper[4680]: I0217 17:57:42.513566 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-cmvdn" podStartSLOduration=2.448716387 podStartE2EDuration="2.513547578s" podCreationTimestamp="2026-02-17 17:57:40 +0000 UTC" firstStartedPulling="2026-02-17 17:57:41.619150896 +0000 UTC m=+791.170049575" lastFinishedPulling="2026-02-17 17:57:41.683982087 +0000 UTC m=+791.234880766" observedRunningTime="2026-02-17 17:57:42.511597087 +0000 UTC m=+792.062495776" watchObservedRunningTime="2026-02-17 17:57:42.513547578 +0000 UTC m=+792.064446257" Feb 17 17:57:42 crc kubenswrapper[4680]: I0217 17:57:42.525492 4680 scope.go:117] "RemoveContainer" containerID="a614e008c5e5315902f7bf974d89e62ac73fef91ddbfa54f05c7f2f1e054eff9" Feb 17 17:57:42 crc kubenswrapper[4680]: E0217 17:57:42.528764 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a614e008c5e5315902f7bf974d89e62ac73fef91ddbfa54f05c7f2f1e054eff9\": container with ID starting with a614e008c5e5315902f7bf974d89e62ac73fef91ddbfa54f05c7f2f1e054eff9 not found: ID does not exist" containerID="a614e008c5e5315902f7bf974d89e62ac73fef91ddbfa54f05c7f2f1e054eff9" Feb 17 17:57:42 crc kubenswrapper[4680]: I0217 17:57:42.528817 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a614e008c5e5315902f7bf974d89e62ac73fef91ddbfa54f05c7f2f1e054eff9"} err="failed to get container status \"a614e008c5e5315902f7bf974d89e62ac73fef91ddbfa54f05c7f2f1e054eff9\": rpc error: code = NotFound desc = could not find container \"a614e008c5e5315902f7bf974d89e62ac73fef91ddbfa54f05c7f2f1e054eff9\": container with ID starting with a614e008c5e5315902f7bf974d89e62ac73fef91ddbfa54f05c7f2f1e054eff9 not found: ID does not exist" Feb 17 17:57:42 crc kubenswrapper[4680]: I0217 17:57:42.531480 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-jl7pj"] Feb 17 17:57:42 crc kubenswrapper[4680]: I0217 17:57:42.537103 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-jl7pj"] Feb 17 17:57:43 crc kubenswrapper[4680]: I0217 17:57:43.112965 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="726358ca-de4d-4558-a233-7456d8a51276" path="/var/lib/kubelet/pods/726358ca-de4d-4558-a233-7456d8a51276/volumes" Feb 17 17:57:43 crc kubenswrapper[4680]: I0217 17:57:43.652233 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-fnhv4" Feb 17 17:57:44 crc kubenswrapper[4680]: I0217 17:57:44.072461 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-w72g5" Feb 17 17:57:51 crc kubenswrapper[4680]: I0217 17:57:51.214365 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-cmvdn" Feb 17 17:57:51 crc kubenswrapper[4680]: I0217 17:57:51.215456 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-cmvdn" Feb 17 17:57:51 crc kubenswrapper[4680]: I0217 
17:57:51.236035 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-cmvdn" Feb 17 17:57:51 crc kubenswrapper[4680]: I0217 17:57:51.565174 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-cmvdn" Feb 17 17:57:53 crc kubenswrapper[4680]: I0217 17:57:53.464388 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-ntt64" Feb 17 17:57:59 crc kubenswrapper[4680]: I0217 17:57:59.851996 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr"] Feb 17 17:57:59 crc kubenswrapper[4680]: E0217 17:57:59.852841 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="726358ca-de4d-4558-a233-7456d8a51276" containerName="registry-server" Feb 17 17:57:59 crc kubenswrapper[4680]: I0217 17:57:59.852854 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="726358ca-de4d-4558-a233-7456d8a51276" containerName="registry-server" Feb 17 17:57:59 crc kubenswrapper[4680]: I0217 17:57:59.852989 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="726358ca-de4d-4558-a233-7456d8a51276" containerName="registry-server" Feb 17 17:57:59 crc kubenswrapper[4680]: I0217 17:57:59.854892 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr" Feb 17 17:57:59 crc kubenswrapper[4680]: I0217 17:57:59.861618 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr"] Feb 17 17:57:59 crc kubenswrapper[4680]: I0217 17:57:59.863400 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-dqvb8" Feb 17 17:57:59 crc kubenswrapper[4680]: I0217 17:57:59.873025 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad62ce23-e261-4066-9465-29efa5b1eb6d-util\") pod \"6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr\" (UID: \"ad62ce23-e261-4066-9465-29efa5b1eb6d\") " pod="openstack-operators/6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr" Feb 17 17:57:59 crc kubenswrapper[4680]: I0217 17:57:59.873098 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad62ce23-e261-4066-9465-29efa5b1eb6d-bundle\") pod \"6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr\" (UID: \"ad62ce23-e261-4066-9465-29efa5b1eb6d\") " pod="openstack-operators/6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr" Feb 17 17:57:59 crc kubenswrapper[4680]: I0217 17:57:59.873262 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qcj6\" (UniqueName: \"kubernetes.io/projected/ad62ce23-e261-4066-9465-29efa5b1eb6d-kube-api-access-9qcj6\") pod \"6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr\" (UID: \"ad62ce23-e261-4066-9465-29efa5b1eb6d\") " pod="openstack-operators/6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr" Feb 17 17:57:59 crc kubenswrapper[4680]: I0217 17:57:59.974549 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/ad62ce23-e261-4066-9465-29efa5b1eb6d-util\") pod \"6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr\" (UID: \"ad62ce23-e261-4066-9465-29efa5b1eb6d\") " pod="openstack-operators/6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr" Feb 17 17:57:59 crc kubenswrapper[4680]: I0217 17:57:59.974614 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad62ce23-e261-4066-9465-29efa5b1eb6d-bundle\") pod \"6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr\" (UID: \"ad62ce23-e261-4066-9465-29efa5b1eb6d\") " pod="openstack-operators/6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr" Feb 17 17:57:59 crc kubenswrapper[4680]: I0217 17:57:59.974671 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qcj6\" (UniqueName: \"kubernetes.io/projected/ad62ce23-e261-4066-9465-29efa5b1eb6d-kube-api-access-9qcj6\") pod \"6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr\" (UID: \"ad62ce23-e261-4066-9465-29efa5b1eb6d\") " pod="openstack-operators/6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr" Feb 17 17:57:59 crc kubenswrapper[4680]: I0217 17:57:59.975001 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad62ce23-e261-4066-9465-29efa5b1eb6d-util\") pod \"6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr\" (UID: \"ad62ce23-e261-4066-9465-29efa5b1eb6d\") " pod="openstack-operators/6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr" Feb 17 17:57:59 crc kubenswrapper[4680]: I0217 17:57:59.975012 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad62ce23-e261-4066-9465-29efa5b1eb6d-bundle\") pod \"6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr\" (UID: \"ad62ce23-e261-4066-9465-29efa5b1eb6d\") " pod="openstack-operators/6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr" Feb 17 17:57:59 crc kubenswrapper[4680]: I0217 17:57:59.994161 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qcj6\" (UniqueName: \"kubernetes.io/projected/ad62ce23-e261-4066-9465-29efa5b1eb6d-kube-api-access-9qcj6\") pod \"6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr\" (UID: \"ad62ce23-e261-4066-9465-29efa5b1eb6d\") " pod="openstack-operators/6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr" Feb 17 17:58:00 crc kubenswrapper[4680]: I0217 17:58:00.171170 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr" Feb 17 17:58:00 crc kubenswrapper[4680]: I0217 17:58:00.372525 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr"] Feb 17 17:58:00 crc kubenswrapper[4680]: I0217 17:58:00.594418 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr" event={"ID":"ad62ce23-e261-4066-9465-29efa5b1eb6d","Type":"ContainerStarted","Data":"adc94ce1313146bfba85649d89c396fc2834b345b590011e0fc29221a470d27d"} Feb 17 17:58:01 crc kubenswrapper[4680]: I0217 17:58:01.601368 4680 generic.go:334] "Generic (PLEG): container finished" podID="ad62ce23-e261-4066-9465-29efa5b1eb6d" containerID="dde29c44e7efbdf8a3301c00a748891f2849de0bf0f0608868bdfc6393c284c3" exitCode=0 Feb 17 17:58:01 crc kubenswrapper[4680]: I0217 17:58:01.601418 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr" event={"ID":"ad62ce23-e261-4066-9465-29efa5b1eb6d","Type":"ContainerDied","Data":"dde29c44e7efbdf8a3301c00a748891f2849de0bf0f0608868bdfc6393c284c3"} Feb 17 17:58:02 crc kubenswrapper[4680]: I0217 17:58:02.609050 4680 generic.go:334] "Generic (PLEG): container finished" podID="ad62ce23-e261-4066-9465-29efa5b1eb6d" containerID="6c5ac1919914d14e6e526ee187a932f805346cacd27d9475bde09f609a7e8270" exitCode=0 Feb 17 17:58:02 crc kubenswrapper[4680]: I0217 17:58:02.609087 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr" event={"ID":"ad62ce23-e261-4066-9465-29efa5b1eb6d","Type":"ContainerDied","Data":"6c5ac1919914d14e6e526ee187a932f805346cacd27d9475bde09f609a7e8270"} Feb 17 17:58:03 crc kubenswrapper[4680]: I0217 17:58:03.617487 4680 generic.go:334] "Generic (PLEG): container finished" podID="ad62ce23-e261-4066-9465-29efa5b1eb6d" containerID="d9b4089b2de0db1b5db5a3f723aed23c7f8c2084789fd94d02b959e9fd75d5f8" exitCode=0 Feb 17 17:58:03 crc kubenswrapper[4680]: I0217 17:58:03.617643 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr" event={"ID":"ad62ce23-e261-4066-9465-29efa5b1eb6d","Type":"ContainerDied","Data":"d9b4089b2de0db1b5db5a3f723aed23c7f8c2084789fd94d02b959e9fd75d5f8"} Feb 17 17:58:04 crc kubenswrapper[4680]: I0217 17:58:04.849506 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr" Feb 17 17:58:04 crc kubenswrapper[4680]: I0217 17:58:04.935776 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad62ce23-e261-4066-9465-29efa5b1eb6d-util\") pod \"ad62ce23-e261-4066-9465-29efa5b1eb6d\" (UID: \"ad62ce23-e261-4066-9465-29efa5b1eb6d\") " Feb 17 17:58:04 crc kubenswrapper[4680]: I0217 17:58:04.935862 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad62ce23-e261-4066-9465-29efa5b1eb6d-bundle\") pod \"ad62ce23-e261-4066-9465-29efa5b1eb6d\" (UID: \"ad62ce23-e261-4066-9465-29efa5b1eb6d\") " Feb 17 17:58:04 crc kubenswrapper[4680]: I0217 17:58:04.935884 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qcj6\" (UniqueName: \"kubernetes.io/projected/ad62ce23-e261-4066-9465-29efa5b1eb6d-kube-api-access-9qcj6\") pod \"ad62ce23-e261-4066-9465-29efa5b1eb6d\" (UID: \"ad62ce23-e261-4066-9465-29efa5b1eb6d\") " Feb 17 17:58:04 crc kubenswrapper[4680]: I0217 17:58:04.936817 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad62ce23-e261-4066-9465-29efa5b1eb6d-bundle" (OuterVolumeSpecName: "bundle") pod "ad62ce23-e261-4066-9465-29efa5b1eb6d" (UID: "ad62ce23-e261-4066-9465-29efa5b1eb6d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:58:04 crc kubenswrapper[4680]: I0217 17:58:04.952871 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad62ce23-e261-4066-9465-29efa5b1eb6d-kube-api-access-9qcj6" (OuterVolumeSpecName: "kube-api-access-9qcj6") pod "ad62ce23-e261-4066-9465-29efa5b1eb6d" (UID: "ad62ce23-e261-4066-9465-29efa5b1eb6d"). InnerVolumeSpecName "kube-api-access-9qcj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:58:04 crc kubenswrapper[4680]: I0217 17:58:04.953903 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad62ce23-e261-4066-9465-29efa5b1eb6d-util" (OuterVolumeSpecName: "util") pod "ad62ce23-e261-4066-9465-29efa5b1eb6d" (UID: "ad62ce23-e261-4066-9465-29efa5b1eb6d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:58:05 crc kubenswrapper[4680]: I0217 17:58:05.037188 4680 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad62ce23-e261-4066-9465-29efa5b1eb6d-util\") on node \"crc\" DevicePath \"\"" Feb 17 17:58:05 crc kubenswrapper[4680]: I0217 17:58:05.037222 4680 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad62ce23-e261-4066-9465-29efa5b1eb6d-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 17:58:05 crc kubenswrapper[4680]: I0217 17:58:05.037232 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qcj6\" (UniqueName: \"kubernetes.io/projected/ad62ce23-e261-4066-9465-29efa5b1eb6d-kube-api-access-9qcj6\") on node \"crc\" DevicePath \"\"" Feb 17 17:58:05 crc kubenswrapper[4680]: I0217 17:58:05.629742 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr" event={"ID":"ad62ce23-e261-4066-9465-29efa5b1eb6d","Type":"ContainerDied","Data":"adc94ce1313146bfba85649d89c396fc2834b345b590011e0fc29221a470d27d"} Feb 17 17:58:05 crc kubenswrapper[4680]: I0217 17:58:05.629793 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adc94ce1313146bfba85649d89c396fc2834b345b590011e0fc29221a470d27d" Feb 17 17:58:05 crc kubenswrapper[4680]: I0217 17:58:05.629894 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr" Feb 17 17:58:12 crc kubenswrapper[4680]: I0217 17:58:12.453520 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5498fb9db9-fg6q4"] Feb 17 17:58:12 crc kubenswrapper[4680]: E0217 17:58:12.454219 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad62ce23-e261-4066-9465-29efa5b1eb6d" containerName="extract" Feb 17 17:58:12 crc kubenswrapper[4680]: I0217 17:58:12.454231 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad62ce23-e261-4066-9465-29efa5b1eb6d" containerName="extract" Feb 17 17:58:12 crc kubenswrapper[4680]: E0217 17:58:12.454242 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad62ce23-e261-4066-9465-29efa5b1eb6d" containerName="util" Feb 17 17:58:12 crc kubenswrapper[4680]: I0217 17:58:12.454247 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad62ce23-e261-4066-9465-29efa5b1eb6d" containerName="util" Feb 17 17:58:12 crc kubenswrapper[4680]: E0217 17:58:12.454253 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad62ce23-e261-4066-9465-29efa5b1eb6d" containerName="pull" Feb 17 17:58:12 crc kubenswrapper[4680]: I0217 17:58:12.454259 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad62ce23-e261-4066-9465-29efa5b1eb6d" containerName="pull" Feb 17 17:58:12 crc kubenswrapper[4680]: I0217 17:58:12.454399 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad62ce23-e261-4066-9465-29efa5b1eb6d" containerName="extract" Feb 17 17:58:12 crc kubenswrapper[4680]: I0217 17:58:12.454791 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5498fb9db9-fg6q4" Feb 17 17:58:12 crc kubenswrapper[4680]: I0217 17:58:12.456399 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-sz82w" Feb 17 17:58:12 crc kubenswrapper[4680]: I0217 17:58:12.541831 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5498fb9db9-fg6q4"] Feb 17 17:58:12 crc kubenswrapper[4680]: I0217 17:58:12.626426 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sc77\" (UniqueName: \"kubernetes.io/projected/c343adc1-0596-4313-9d07-e7a140130045-kube-api-access-5sc77\") pod \"openstack-operator-controller-init-5498fb9db9-fg6q4\" (UID: \"c343adc1-0596-4313-9d07-e7a140130045\") " pod="openstack-operators/openstack-operator-controller-init-5498fb9db9-fg6q4" Feb 17 17:58:12 crc kubenswrapper[4680]: I0217 17:58:12.727633 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sc77\" (UniqueName: \"kubernetes.io/projected/c343adc1-0596-4313-9d07-e7a140130045-kube-api-access-5sc77\") pod \"openstack-operator-controller-init-5498fb9db9-fg6q4\" (UID: \"c343adc1-0596-4313-9d07-e7a140130045\") " pod="openstack-operators/openstack-operator-controller-init-5498fb9db9-fg6q4" Feb 17 17:58:12 crc kubenswrapper[4680]: I0217 17:58:12.746257 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sc77\" (UniqueName: \"kubernetes.io/projected/c343adc1-0596-4313-9d07-e7a140130045-kube-api-access-5sc77\") pod \"openstack-operator-controller-init-5498fb9db9-fg6q4\" (UID: \"c343adc1-0596-4313-9d07-e7a140130045\") " pod="openstack-operators/openstack-operator-controller-init-5498fb9db9-fg6q4" Feb 17 17:58:12 crc kubenswrapper[4680]: I0217 17:58:12.771020 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5498fb9db9-fg6q4" Feb 17 17:58:12 crc kubenswrapper[4680]: I0217 17:58:12.994008 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5498fb9db9-fg6q4"] Feb 17 17:58:13 crc kubenswrapper[4680]: I0217 17:58:13.681056 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5498fb9db9-fg6q4" event={"ID":"c343adc1-0596-4313-9d07-e7a140130045","Type":"ContainerStarted","Data":"6fb236223a4ecac49f2b22afaf552bdcb2b967021c824d793a778659086ab6bc"} Feb 17 17:58:18 crc kubenswrapper[4680]: I0217 17:58:18.719762 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5498fb9db9-fg6q4" event={"ID":"c343adc1-0596-4313-9d07-e7a140130045","Type":"ContainerStarted","Data":"61e6fd874ca881a400193affc4bb8e23ca4c81f9624fbc67bfadad0d926d1ab8"} Feb 17 17:58:18 crc kubenswrapper[4680]: I0217 17:58:18.720270 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5498fb9db9-fg6q4" Feb 17 17:58:18 crc kubenswrapper[4680]: I0217 17:58:18.762900 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5498fb9db9-fg6q4" podStartSLOduration=1.6357686569999998 podStartE2EDuration="6.762883605s" podCreationTimestamp="2026-02-17 17:58:12 +0000 UTC" firstStartedPulling="2026-02-17 17:58:13.005577485 +0000 UTC m=+822.556476164" lastFinishedPulling="2026-02-17 17:58:18.132692433 +0000 UTC m=+827.683591112" observedRunningTime="2026-02-17 17:58:18.757487104 +0000 UTC m=+828.308385793" watchObservedRunningTime="2026-02-17 17:58:18.762883605 +0000 UTC m=+828.313782284" Feb 17 17:58:24 crc kubenswrapper[4680]: I0217 17:58:24.109921 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4fdmc"] Feb 17 17:58:24 crc kubenswrapper[4680]: I0217 17:58:24.111978 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4fdmc" Feb 17 17:58:24 crc kubenswrapper[4680]: I0217 17:58:24.124244 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4fdmc"] Feb 17 17:58:24 crc kubenswrapper[4680]: I0217 17:58:24.208165 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca28f94-7701-41c7-a87c-aa86ffbfb09d-utilities\") pod \"community-operators-4fdmc\" (UID: \"2ca28f94-7701-41c7-a87c-aa86ffbfb09d\") " pod="openshift-marketplace/community-operators-4fdmc" Feb 17 17:58:24 crc kubenswrapper[4680]: I0217 17:58:24.208357 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca28f94-7701-41c7-a87c-aa86ffbfb09d-catalog-content\") pod \"community-operators-4fdmc\" (UID: \"2ca28f94-7701-41c7-a87c-aa86ffbfb09d\") " pod="openshift-marketplace/community-operators-4fdmc" Feb 17 17:58:24 crc kubenswrapper[4680]: I0217 17:58:24.208421 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr4p8\" (UniqueName: \"kubernetes.io/projected/2ca28f94-7701-41c7-a87c-aa86ffbfb09d-kube-api-access-nr4p8\") pod \"community-operators-4fdmc\" (UID: \"2ca28f94-7701-41c7-a87c-aa86ffbfb09d\") " pod="openshift-marketplace/community-operators-4fdmc" Feb 17 17:58:24 crc kubenswrapper[4680]: I0217 17:58:24.309802 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca28f94-7701-41c7-a87c-aa86ffbfb09d-utilities\") pod \"community-operators-4fdmc\" (UID: \"2ca28f94-7701-41c7-a87c-aa86ffbfb09d\") " pod="openshift-marketplace/community-operators-4fdmc" Feb 17 17:58:24 crc kubenswrapper[4680]: I0217 17:58:24.309865 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca28f94-7701-41c7-a87c-aa86ffbfb09d-catalog-content\") pod \"community-operators-4fdmc\" (UID: \"2ca28f94-7701-41c7-a87c-aa86ffbfb09d\") " pod="openshift-marketplace/community-operators-4fdmc" Feb 17 17:58:24 crc kubenswrapper[4680]: I0217 17:58:24.309884 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr4p8\" (UniqueName: \"kubernetes.io/projected/2ca28f94-7701-41c7-a87c-aa86ffbfb09d-kube-api-access-nr4p8\") pod \"community-operators-4fdmc\" (UID: \"2ca28f94-7701-41c7-a87c-aa86ffbfb09d\") " pod="openshift-marketplace/community-operators-4fdmc" Feb 17 17:58:24 crc kubenswrapper[4680]: I0217 17:58:24.310470 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca28f94-7701-41c7-a87c-aa86ffbfb09d-utilities\") pod \"community-operators-4fdmc\" (UID: \"2ca28f94-7701-41c7-a87c-aa86ffbfb09d\") " pod="openshift-marketplace/community-operators-4fdmc" Feb 17 17:58:24 crc kubenswrapper[4680]: I0217 17:58:24.310524 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca28f94-7701-41c7-a87c-aa86ffbfb09d-catalog-content\") pod \"community-operators-4fdmc\" (UID: \"2ca28f94-7701-41c7-a87c-aa86ffbfb09d\") " pod="openshift-marketplace/community-operators-4fdmc" Feb 17 17:58:24 crc kubenswrapper[4680]: I0217 17:58:24.339915 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nr4p8\" (UniqueName: \"kubernetes.io/projected/2ca28f94-7701-41c7-a87c-aa86ffbfb09d-kube-api-access-nr4p8\") pod \"community-operators-4fdmc\" (UID: \"2ca28f94-7701-41c7-a87c-aa86ffbfb09d\") " pod="openshift-marketplace/community-operators-4fdmc" Feb 17 17:58:24 crc kubenswrapper[4680]: I0217 17:58:24.472628 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4fdmc" Feb 17 17:58:25 crc kubenswrapper[4680]: I0217 17:58:25.103436 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4fdmc"] Feb 17 17:58:25 crc kubenswrapper[4680]: I0217 17:58:25.774013 4680 generic.go:334] "Generic (PLEG): container finished" podID="2ca28f94-7701-41c7-a87c-aa86ffbfb09d" containerID="d1cba4e673fbd50e30069835719858d14ee86d7b041731bc112f0a7b888dbaed" exitCode=0 Feb 17 17:58:25 crc kubenswrapper[4680]: I0217 17:58:25.774054 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fdmc" event={"ID":"2ca28f94-7701-41c7-a87c-aa86ffbfb09d","Type":"ContainerDied","Data":"d1cba4e673fbd50e30069835719858d14ee86d7b041731bc112f0a7b888dbaed"} Feb 17 17:58:25 crc kubenswrapper[4680]: I0217 17:58:25.774078 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fdmc" event={"ID":"2ca28f94-7701-41c7-a87c-aa86ffbfb09d","Type":"ContainerStarted","Data":"a8019f7d31d0b38e469cb2370b2fcba15f25f3af8fb7316c5aaacd3baef9797e"} Feb 17 17:58:27 crc kubenswrapper[4680]: I0217 17:58:27.310376 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2x8gn"] Feb 17 17:58:27 crc kubenswrapper[4680]: I0217 17:58:27.312011 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2x8gn" Feb 17 17:58:27 crc kubenswrapper[4680]: I0217 17:58:27.342039 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2x8gn"] Feb 17 17:58:27 crc kubenswrapper[4680]: I0217 17:58:27.356026 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/580f185a-0690-4df3-9fc5-5d03800da23a-utilities\") pod \"redhat-marketplace-2x8gn\" (UID: \"580f185a-0690-4df3-9fc5-5d03800da23a\") " pod="openshift-marketplace/redhat-marketplace-2x8gn" Feb 17 17:58:27 crc kubenswrapper[4680]: I0217 17:58:27.356079 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rbvr\" (UniqueName: \"kubernetes.io/projected/580f185a-0690-4df3-9fc5-5d03800da23a-kube-api-access-4rbvr\") pod \"redhat-marketplace-2x8gn\" (UID: \"580f185a-0690-4df3-9fc5-5d03800da23a\") " pod="openshift-marketplace/redhat-marketplace-2x8gn" Feb 17 17:58:27 crc kubenswrapper[4680]: I0217 17:58:27.356123 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/580f185a-0690-4df3-9fc5-5d03800da23a-catalog-content\") pod \"redhat-marketplace-2x8gn\" (UID: \"580f185a-0690-4df3-9fc5-5d03800da23a\") " pod="openshift-marketplace/redhat-marketplace-2x8gn" Feb 17 17:58:27 crc kubenswrapper[4680]: I0217 17:58:27.457790 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/580f185a-0690-4df3-9fc5-5d03800da23a-utilities\") pod \"redhat-marketplace-2x8gn\" (UID: \"580f185a-0690-4df3-9fc5-5d03800da23a\") " pod="openshift-marketplace/redhat-marketplace-2x8gn" Feb 17 17:58:27 crc kubenswrapper[4680]: I0217 17:58:27.457849 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rbvr\" (UniqueName: \"kubernetes.io/projected/580f185a-0690-4df3-9fc5-5d03800da23a-kube-api-access-4rbvr\") pod \"redhat-marketplace-2x8gn\" (UID: \"580f185a-0690-4df3-9fc5-5d03800da23a\") " pod="openshift-marketplace/redhat-marketplace-2x8gn" Feb 17 17:58:27 crc kubenswrapper[4680]: I0217 17:58:27.457890 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/580f185a-0690-4df3-9fc5-5d03800da23a-catalog-content\") pod \"redhat-marketplace-2x8gn\" (UID: \"580f185a-0690-4df3-9fc5-5d03800da23a\") " pod="openshift-marketplace/redhat-marketplace-2x8gn" Feb 17 17:58:27 crc kubenswrapper[4680]: I0217 17:58:27.458612 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/580f185a-0690-4df3-9fc5-5d03800da23a-utilities\") pod \"redhat-marketplace-2x8gn\" (UID: \"580f185a-0690-4df3-9fc5-5d03800da23a\") " pod="openshift-marketplace/redhat-marketplace-2x8gn" Feb 17 17:58:27 crc kubenswrapper[4680]: I0217 17:58:27.458644 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/580f185a-0690-4df3-9fc5-5d03800da23a-catalog-content\") pod \"redhat-marketplace-2x8gn\" (UID: \"580f185a-0690-4df3-9fc5-5d03800da23a\") " pod="openshift-marketplace/redhat-marketplace-2x8gn" Feb 17 17:58:27 crc kubenswrapper[4680]: I0217 17:58:27.539454 4680 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-4rbvr\" (UniqueName: \"kubernetes.io/projected/580f185a-0690-4df3-9fc5-5d03800da23a-kube-api-access-4rbvr\") pod \"redhat-marketplace-2x8gn\" (UID: \"580f185a-0690-4df3-9fc5-5d03800da23a\") " pod="openshift-marketplace/redhat-marketplace-2x8gn" Feb 17 17:58:27 crc kubenswrapper[4680]: I0217 17:58:27.632957 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2x8gn" Feb 17 17:58:27 crc kubenswrapper[4680]: I0217 17:58:27.788023 4680 generic.go:334] "Generic (PLEG): container finished" podID="2ca28f94-7701-41c7-a87c-aa86ffbfb09d" containerID="8f6e999fd77af53beb558cbb8ca6bdfa43444b2899f985788e945e06800943d2" exitCode=0 Feb 17 17:58:27 crc kubenswrapper[4680]: I0217 17:58:27.788105 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fdmc" event={"ID":"2ca28f94-7701-41c7-a87c-aa86ffbfb09d","Type":"ContainerDied","Data":"8f6e999fd77af53beb558cbb8ca6bdfa43444b2899f985788e945e06800943d2"} Feb 17 17:58:28 crc kubenswrapper[4680]: I0217 17:58:28.161505 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2x8gn"] Feb 17 17:58:28 crc kubenswrapper[4680]: W0217 17:58:28.180828 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod580f185a_0690_4df3_9fc5_5d03800da23a.slice/crio-91752b2d28606dcca34b2ff643d77b7834135ec7f175edfcc44b04e9a1153c40 WatchSource:0}: Error finding container 91752b2d28606dcca34b2ff643d77b7834135ec7f175edfcc44b04e9a1153c40: Status 404 returned error can't find the container with id 91752b2d28606dcca34b2ff643d77b7834135ec7f175edfcc44b04e9a1153c40 Feb 17 17:58:28 crc kubenswrapper[4680]: I0217 17:58:28.795430 4680 generic.go:334] "Generic (PLEG): container finished" podID="580f185a-0690-4df3-9fc5-5d03800da23a" containerID="818353a09f20fc03b39e3dbe83197ef6193bf7c136e20a2736071587d5771f7c" exitCode=0 Feb 17 17:58:28 crc kubenswrapper[4680]: I0217 17:58:28.795542 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2x8gn" event={"ID":"580f185a-0690-4df3-9fc5-5d03800da23a","Type":"ContainerDied","Data":"818353a09f20fc03b39e3dbe83197ef6193bf7c136e20a2736071587d5771f7c"} Feb 17 17:58:28 crc kubenswrapper[4680]: I0217 17:58:28.795611 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2x8gn" event={"ID":"580f185a-0690-4df3-9fc5-5d03800da23a","Type":"ContainerStarted","Data":"91752b2d28606dcca34b2ff643d77b7834135ec7f175edfcc44b04e9a1153c40"} Feb 17 17:58:28 crc kubenswrapper[4680]: I0217 17:58:28.797439 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fdmc" event={"ID":"2ca28f94-7701-41c7-a87c-aa86ffbfb09d","Type":"ContainerStarted","Data":"c1dd290717706209615a11c583cc9c0c9a60bf733b90f75142631b93d6bbf810"} Feb 17 17:58:28 crc kubenswrapper[4680]: I0217 17:58:28.834864 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4fdmc" podStartSLOduration=2.398998485 podStartE2EDuration="4.834846947s" podCreationTimestamp="2026-02-17 17:58:24 +0000 UTC" firstStartedPulling="2026-02-17 17:58:25.775383492 +0000 UTC m=+835.326282171" lastFinishedPulling="2026-02-17 17:58:28.211231954 +0000 UTC m=+837.762130633" observedRunningTime="2026-02-17 17:58:28.83239794 +0000 UTC m=+838.383296629" 
watchObservedRunningTime="2026-02-17 17:58:28.834846947 +0000 UTC m=+838.385745626" Feb 17 17:58:29 crc kubenswrapper[4680]: I0217 17:58:29.805449 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2x8gn" event={"ID":"580f185a-0690-4df3-9fc5-5d03800da23a","Type":"ContainerStarted","Data":"ec228d9f018e76359b7efca226cdbc44490ebacd9fb7ebaa77b6185af7c991dc"} Feb 17 17:58:30 crc kubenswrapper[4680]: I0217 17:58:30.812256 4680 generic.go:334] "Generic (PLEG): container finished" podID="580f185a-0690-4df3-9fc5-5d03800da23a" containerID="ec228d9f018e76359b7efca226cdbc44490ebacd9fb7ebaa77b6185af7c991dc" exitCode=0 Feb 17 17:58:30 crc kubenswrapper[4680]: I0217 17:58:30.812565 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2x8gn" event={"ID":"580f185a-0690-4df3-9fc5-5d03800da23a","Type":"ContainerDied","Data":"ec228d9f018e76359b7efca226cdbc44490ebacd9fb7ebaa77b6185af7c991dc"} Feb 17 17:58:31 crc kubenswrapper[4680]: I0217 17:58:31.820622 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2x8gn" event={"ID":"580f185a-0690-4df3-9fc5-5d03800da23a","Type":"ContainerStarted","Data":"bd73c4ac502613437f5476869158ca9a1522da657871d2a9f4607881ae64c621"} Feb 17 17:58:31 crc kubenswrapper[4680]: I0217 17:58:31.841365 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2x8gn" podStartSLOduration=2.3710010329999998 podStartE2EDuration="4.841344576s" podCreationTimestamp="2026-02-17 17:58:27 +0000 UTC" firstStartedPulling="2026-02-17 17:58:28.799407036 +0000 UTC m=+838.350305715" lastFinishedPulling="2026-02-17 17:58:31.269750589 +0000 UTC m=+840.820649258" observedRunningTime="2026-02-17 17:58:31.836307927 +0000 UTC m=+841.387206606" watchObservedRunningTime="2026-02-17 17:58:31.841344576 +0000 UTC m=+841.392243255" Feb 17 17:58:32 crc kubenswrapper[4680]: I0217 17:58:32.775813 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5498fb9db9-fg6q4" Feb 17 17:58:34 crc kubenswrapper[4680]: I0217 17:58:34.473190 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4fdmc" Feb 17 17:58:34 crc kubenswrapper[4680]: I0217 17:58:34.473252 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4fdmc" Feb 17 17:58:34 crc kubenswrapper[4680]: I0217 17:58:34.529274 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4fdmc" Feb 17 17:58:34 crc kubenswrapper[4680]: I0217 17:58:34.880854 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4fdmc" Feb 17 17:58:35 crc kubenswrapper[4680]: I0217 17:58:35.500608 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4fdmc"] Feb 17 17:58:36 crc kubenswrapper[4680]: I0217 17:58:36.848468 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4fdmc" podUID="2ca28f94-7701-41c7-a87c-aa86ffbfb09d" containerName="registry-server" containerID="cri-o://c1dd290717706209615a11c583cc9c0c9a60bf733b90f75142631b93d6bbf810" gracePeriod=2 Feb 17 17:58:37 crc kubenswrapper[4680]: I0217 17:58:37.633787 4680 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2x8gn" Feb 17 17:58:37 crc kubenswrapper[4680]: I0217 17:58:37.633849 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2x8gn" Feb 17 17:58:37 crc kubenswrapper[4680]: I0217 17:58:37.696025 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2x8gn" Feb 17 17:58:37 crc kubenswrapper[4680]: I0217 17:58:37.855354 4680 generic.go:334] "Generic (PLEG): container finished" podID="2ca28f94-7701-41c7-a87c-aa86ffbfb09d" containerID="c1dd290717706209615a11c583cc9c0c9a60bf733b90f75142631b93d6bbf810" exitCode=0 Feb 17 17:58:37 crc kubenswrapper[4680]: I0217 17:58:37.856017 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fdmc" event={"ID":"2ca28f94-7701-41c7-a87c-aa86ffbfb09d","Type":"ContainerDied","Data":"c1dd290717706209615a11c583cc9c0c9a60bf733b90f75142631b93d6bbf810"} Feb 17 17:58:37 crc kubenswrapper[4680]: I0217 17:58:37.856046 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fdmc" event={"ID":"2ca28f94-7701-41c7-a87c-aa86ffbfb09d","Type":"ContainerDied","Data":"a8019f7d31d0b38e469cb2370b2fcba15f25f3af8fb7316c5aaacd3baef9797e"} Feb 17 17:58:37 crc kubenswrapper[4680]: I0217 17:58:37.856058 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8019f7d31d0b38e469cb2370b2fcba15f25f3af8fb7316c5aaacd3baef9797e" Feb 17 17:58:37 crc kubenswrapper[4680]: I0217 17:58:37.874286 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4fdmc" Feb 17 17:58:37 crc kubenswrapper[4680]: I0217 17:58:37.909483 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2x8gn" Feb 17 17:58:37 crc kubenswrapper[4680]: I0217 17:58:37.998212 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr4p8\" (UniqueName: \"kubernetes.io/projected/2ca28f94-7701-41c7-a87c-aa86ffbfb09d-kube-api-access-nr4p8\") pod \"2ca28f94-7701-41c7-a87c-aa86ffbfb09d\" (UID: \"2ca28f94-7701-41c7-a87c-aa86ffbfb09d\") " Feb 17 17:58:37 crc kubenswrapper[4680]: I0217 17:58:37.998388 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca28f94-7701-41c7-a87c-aa86ffbfb09d-catalog-content\") pod \"2ca28f94-7701-41c7-a87c-aa86ffbfb09d\" (UID: \"2ca28f94-7701-41c7-a87c-aa86ffbfb09d\") " Feb 17 17:58:37 crc kubenswrapper[4680]: I0217 17:58:37.998454 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca28f94-7701-41c7-a87c-aa86ffbfb09d-utilities\") pod \"2ca28f94-7701-41c7-a87c-aa86ffbfb09d\" (UID: \"2ca28f94-7701-41c7-a87c-aa86ffbfb09d\") " Feb 17 17:58:37 crc kubenswrapper[4680]: I0217 17:58:37.999440 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ca28f94-7701-41c7-a87c-aa86ffbfb09d-utilities" (OuterVolumeSpecName: "utilities") pod "2ca28f94-7701-41c7-a87c-aa86ffbfb09d" (UID: "2ca28f94-7701-41c7-a87c-aa86ffbfb09d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:58:38 crc kubenswrapper[4680]: I0217 17:58:38.004564 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ca28f94-7701-41c7-a87c-aa86ffbfb09d-kube-api-access-nr4p8" (OuterVolumeSpecName: "kube-api-access-nr4p8") pod "2ca28f94-7701-41c7-a87c-aa86ffbfb09d" (UID: "2ca28f94-7701-41c7-a87c-aa86ffbfb09d"). InnerVolumeSpecName "kube-api-access-nr4p8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:58:38 crc kubenswrapper[4680]: I0217 17:58:38.050842 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ca28f94-7701-41c7-a87c-aa86ffbfb09d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ca28f94-7701-41c7-a87c-aa86ffbfb09d" (UID: "2ca28f94-7701-41c7-a87c-aa86ffbfb09d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:58:38 crc kubenswrapper[4680]: I0217 17:58:38.100247 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca28f94-7701-41c7-a87c-aa86ffbfb09d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:58:38 crc kubenswrapper[4680]: I0217 17:58:38.100294 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca28f94-7701-41c7-a87c-aa86ffbfb09d-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:58:38 crc kubenswrapper[4680]: I0217 17:58:38.100306 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nr4p8\" (UniqueName: \"kubernetes.io/projected/2ca28f94-7701-41c7-a87c-aa86ffbfb09d-kube-api-access-nr4p8\") on node \"crc\" DevicePath \"\"" Feb 17 17:58:38 crc kubenswrapper[4680]: I0217 17:58:38.882914 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4fdmc" Feb 17 17:58:38 crc kubenswrapper[4680]: I0217 17:58:38.960008 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4fdmc"] Feb 17 17:58:38 crc kubenswrapper[4680]: I0217 17:58:38.966649 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4fdmc"] Feb 17 17:58:39 crc kubenswrapper[4680]: I0217 17:58:39.111445 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ca28f94-7701-41c7-a87c-aa86ffbfb09d" path="/var/lib/kubelet/pods/2ca28f94-7701-41c7-a87c-aa86ffbfb09d/volumes" Feb 17 17:58:40 crc kubenswrapper[4680]: I0217 17:58:40.299628 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2x8gn"] Feb 17 17:58:40 crc kubenswrapper[4680]: I0217 17:58:40.300171 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2x8gn" podUID="580f185a-0690-4df3-9fc5-5d03800da23a" containerName="registry-server" containerID="cri-o://bd73c4ac502613437f5476869158ca9a1522da657871d2a9f4607881ae64c621" gracePeriod=2 Feb 17 17:58:40 crc kubenswrapper[4680]: I0217 17:58:40.657229 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2x8gn" Feb 17 17:58:40 crc kubenswrapper[4680]: I0217 17:58:40.741623 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/580f185a-0690-4df3-9fc5-5d03800da23a-catalog-content\") pod \"580f185a-0690-4df3-9fc5-5d03800da23a\" (UID: \"580f185a-0690-4df3-9fc5-5d03800da23a\") " Feb 17 17:58:40 crc kubenswrapper[4680]: I0217 17:58:40.741692 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/580f185a-0690-4df3-9fc5-5d03800da23a-utilities\") pod \"580f185a-0690-4df3-9fc5-5d03800da23a\" (UID: \"580f185a-0690-4df3-9fc5-5d03800da23a\") " Feb 17 17:58:40 crc kubenswrapper[4680]: I0217 17:58:40.741768 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rbvr\" (UniqueName: \"kubernetes.io/projected/580f185a-0690-4df3-9fc5-5d03800da23a-kube-api-access-4rbvr\") pod \"580f185a-0690-4df3-9fc5-5d03800da23a\" (UID: \"580f185a-0690-4df3-9fc5-5d03800da23a\") " Feb 17 17:58:40 crc kubenswrapper[4680]: I0217 17:58:40.742561 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/580f185a-0690-4df3-9fc5-5d03800da23a-utilities" (OuterVolumeSpecName: "utilities") pod "580f185a-0690-4df3-9fc5-5d03800da23a" (UID: "580f185a-0690-4df3-9fc5-5d03800da23a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:58:40 crc kubenswrapper[4680]: I0217 17:58:40.751545 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/580f185a-0690-4df3-9fc5-5d03800da23a-kube-api-access-4rbvr" (OuterVolumeSpecName: "kube-api-access-4rbvr") pod "580f185a-0690-4df3-9fc5-5d03800da23a" (UID: "580f185a-0690-4df3-9fc5-5d03800da23a"). InnerVolumeSpecName "kube-api-access-4rbvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:58:40 crc kubenswrapper[4680]: I0217 17:58:40.771525 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/580f185a-0690-4df3-9fc5-5d03800da23a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "580f185a-0690-4df3-9fc5-5d03800da23a" (UID: "580f185a-0690-4df3-9fc5-5d03800da23a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:58:40 crc kubenswrapper[4680]: I0217 17:58:40.843700 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rbvr\" (UniqueName: \"kubernetes.io/projected/580f185a-0690-4df3-9fc5-5d03800da23a-kube-api-access-4rbvr\") on node \"crc\" DevicePath \"\"" Feb 17 17:58:40 crc kubenswrapper[4680]: I0217 17:58:40.843751 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/580f185a-0690-4df3-9fc5-5d03800da23a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:58:40 crc kubenswrapper[4680]: I0217 17:58:40.843765 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/580f185a-0690-4df3-9fc5-5d03800da23a-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:58:40 crc kubenswrapper[4680]: I0217 17:58:40.899821 4680 generic.go:334] "Generic (PLEG): container finished" podID="580f185a-0690-4df3-9fc5-5d03800da23a" containerID="bd73c4ac502613437f5476869158ca9a1522da657871d2a9f4607881ae64c621" exitCode=0 Feb 17 17:58:40 crc kubenswrapper[4680]: I0217 17:58:40.899861 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2x8gn" Feb 17 17:58:40 crc kubenswrapper[4680]: I0217 17:58:40.899879 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2x8gn" event={"ID":"580f185a-0690-4df3-9fc5-5d03800da23a","Type":"ContainerDied","Data":"bd73c4ac502613437f5476869158ca9a1522da657871d2a9f4607881ae64c621"} Feb 17 17:58:40 crc kubenswrapper[4680]: I0217 17:58:40.899902 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2x8gn" event={"ID":"580f185a-0690-4df3-9fc5-5d03800da23a","Type":"ContainerDied","Data":"91752b2d28606dcca34b2ff643d77b7834135ec7f175edfcc44b04e9a1153c40"} Feb 17 17:58:40 crc kubenswrapper[4680]: I0217 17:58:40.899920 4680 scope.go:117] "RemoveContainer" containerID="bd73c4ac502613437f5476869158ca9a1522da657871d2a9f4607881ae64c621" Feb 17 17:58:40 crc kubenswrapper[4680]: I0217 17:58:40.919600 4680 scope.go:117] "RemoveContainer" containerID="ec228d9f018e76359b7efca226cdbc44490ebacd9fb7ebaa77b6185af7c991dc" Feb 17 17:58:40 crc kubenswrapper[4680]: I0217 17:58:40.934653 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2x8gn"] Feb 17 17:58:40 crc kubenswrapper[4680]: I0217 17:58:40.938526 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2x8gn"] Feb 17 17:58:40 crc kubenswrapper[4680]: I0217 17:58:40.945499 4680 scope.go:117] "RemoveContainer" containerID="818353a09f20fc03b39e3dbe83197ef6193bf7c136e20a2736071587d5771f7c" Feb 17 17:58:40 crc kubenswrapper[4680]: I0217 17:58:40.959541 4680 scope.go:117] "RemoveContainer" containerID="bd73c4ac502613437f5476869158ca9a1522da657871d2a9f4607881ae64c621" Feb 17 17:58:40 crc kubenswrapper[4680]: E0217 17:58:40.960014 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd73c4ac502613437f5476869158ca9a1522da657871d2a9f4607881ae64c621\": container with ID starting with bd73c4ac502613437f5476869158ca9a1522da657871d2a9f4607881ae64c621 not found: ID does not exist" containerID="bd73c4ac502613437f5476869158ca9a1522da657871d2a9f4607881ae64c621" Feb 17 17:58:40 crc kubenswrapper[4680]: I0217 17:58:40.960041 4680 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd73c4ac502613437f5476869158ca9a1522da657871d2a9f4607881ae64c621"} err="failed to get container status \"bd73c4ac502613437f5476869158ca9a1522da657871d2a9f4607881ae64c621\": rpc error: code = NotFound desc = could not find container \"bd73c4ac502613437f5476869158ca9a1522da657871d2a9f4607881ae64c621\": container with ID starting with bd73c4ac502613437f5476869158ca9a1522da657871d2a9f4607881ae64c621 not found: ID does not exist" Feb 17 17:58:40 crc kubenswrapper[4680]: I0217 17:58:40.960064 4680 scope.go:117] "RemoveContainer" containerID="ec228d9f018e76359b7efca226cdbc44490ebacd9fb7ebaa77b6185af7c991dc" Feb 17 17:58:40 crc kubenswrapper[4680]: E0217 17:58:40.960332 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec228d9f018e76359b7efca226cdbc44490ebacd9fb7ebaa77b6185af7c991dc\": container with ID starting with ec228d9f018e76359b7efca226cdbc44490ebacd9fb7ebaa77b6185af7c991dc not found: ID does not exist" containerID="ec228d9f018e76359b7efca226cdbc44490ebacd9fb7ebaa77b6185af7c991dc" Feb 17 17:58:40 crc kubenswrapper[4680]: I0217 17:58:40.960350 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec228d9f018e76359b7efca226cdbc44490ebacd9fb7ebaa77b6185af7c991dc"} err="failed to get container status \"ec228d9f018e76359b7efca226cdbc44490ebacd9fb7ebaa77b6185af7c991dc\": rpc error: code = NotFound desc = could not find container \"ec228d9f018e76359b7efca226cdbc44490ebacd9fb7ebaa77b6185af7c991dc\": container with ID starting with ec228d9f018e76359b7efca226cdbc44490ebacd9fb7ebaa77b6185af7c991dc not found: ID does not exist" Feb 17 17:58:40 crc kubenswrapper[4680]: I0217 17:58:40.960362 4680 scope.go:117] "RemoveContainer" containerID="818353a09f20fc03b39e3dbe83197ef6193bf7c136e20a2736071587d5771f7c" Feb 17 17:58:40 crc kubenswrapper[4680]: E0217 17:58:40.960633 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"818353a09f20fc03b39e3dbe83197ef6193bf7c136e20a2736071587d5771f7c\": container with ID starting with 818353a09f20fc03b39e3dbe83197ef6193bf7c136e20a2736071587d5771f7c not found: ID does not exist" containerID="818353a09f20fc03b39e3dbe83197ef6193bf7c136e20a2736071587d5771f7c" Feb 17 17:58:40 crc kubenswrapper[4680]: I0217 17:58:40.960653 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"818353a09f20fc03b39e3dbe83197ef6193bf7c136e20a2736071587d5771f7c"} err="failed to get container status \"818353a09f20fc03b39e3dbe83197ef6193bf7c136e20a2736071587d5771f7c\": rpc error: code = NotFound desc = could not find container \"818353a09f20fc03b39e3dbe83197ef6193bf7c136e20a2736071587d5771f7c\": container with ID starting with 818353a09f20fc03b39e3dbe83197ef6193bf7c136e20a2736071587d5771f7c not found: ID does not exist" Feb 17 17:58:41 crc kubenswrapper[4680]: I0217 17:58:41.117382 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="580f185a-0690-4df3-9fc5-5d03800da23a" path="/var/lib/kubelet/pods/580f185a-0690-4df3-9fc5-5d03800da23a/volumes" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.590353 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-9qzbv"] Feb 17 17:58:57 crc kubenswrapper[4680]: E0217 17:58:57.591111 4680 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="580f185a-0690-4df3-9fc5-5d03800da23a" containerName="registry-server" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.591126 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="580f185a-0690-4df3-9fc5-5d03800da23a" containerName="registry-server" Feb 17 17:58:57 crc kubenswrapper[4680]: E0217 17:58:57.591134 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca28f94-7701-41c7-a87c-aa86ffbfb09d" containerName="extract-content" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.591141 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca28f94-7701-41c7-a87c-aa86ffbfb09d" containerName="extract-content" Feb 17 17:58:57 crc kubenswrapper[4680]: E0217 17:58:57.591153 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="580f185a-0690-4df3-9fc5-5d03800da23a" containerName="extract-utilities" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.591161 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="580f185a-0690-4df3-9fc5-5d03800da23a" containerName="extract-utilities" Feb 17 17:58:57 crc kubenswrapper[4680]: E0217 17:58:57.591169 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca28f94-7701-41c7-a87c-aa86ffbfb09d" containerName="extract-utilities" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.591175 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca28f94-7701-41c7-a87c-aa86ffbfb09d" containerName="extract-utilities" Feb 17 17:58:57 crc kubenswrapper[4680]: E0217 17:58:57.591189 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="580f185a-0690-4df3-9fc5-5d03800da23a" containerName="extract-content" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.591195 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="580f185a-0690-4df3-9fc5-5d03800da23a" containerName="extract-content" Feb 17 17:58:57 crc kubenswrapper[4680]: E0217 17:58:57.591209 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca28f94-7701-41c7-a87c-aa86ffbfb09d" containerName="registry-server" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.591216 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca28f94-7701-41c7-a87c-aa86ffbfb09d" containerName="registry-server" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.591367 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ca28f94-7701-41c7-a87c-aa86ffbfb09d" containerName="registry-server" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.591383 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="580f185a-0690-4df3-9fc5-5d03800da23a" containerName="registry-server" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.591884 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-9qzbv" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.597082 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-8d29w" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.598634 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-nbwj4"] Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.599659 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nbwj4" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.601202 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-chvld" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.610584 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-64dzx"] Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.611289 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-64dzx" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.613559 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-d84xd" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.687172 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-zq66h"] Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.687987 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-zq66h" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.697825 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-9qzbv"] Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.699946 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-rj8tn" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.719423 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-zq66h"] Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.727400 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggsqs\" (UniqueName: \"kubernetes.io/projected/3a32bda7-58e5-494b-af42-d255934537ac-kube-api-access-ggsqs\") pod \"glance-operator-controller-manager-77987464f4-zq66h\" (UID: \"3a32bda7-58e5-494b-af42-d255934537ac\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-zq66h" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.727486 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mjsp\" (UniqueName: \"kubernetes.io/projected/7030ebc6-bc71-4388-97c3-14403919b886-kube-api-access-6mjsp\") pod \"cinder-operator-controller-manager-5d946d989d-nbwj4\" (UID: \"7030ebc6-bc71-4388-97c3-14403919b886\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nbwj4" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.727513 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxxd7\" (UniqueName: \"kubernetes.io/projected/5e432327-20d1-4688-8dda-173b62e27110-kube-api-access-zxxd7\") pod \"designate-operator-controller-manager-6d8bf5c495-64dzx\" (UID: \"5e432327-20d1-4688-8dda-173b62e27110\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-64dzx" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.727536 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-f4pmt\" (UniqueName: \"kubernetes.io/projected/c4d30f2e-a54a-43ef-923f-103f4e0c54d5-kube-api-access-f4pmt\") pod \"barbican-operator-controller-manager-868647ff47-9qzbv\" (UID: \"c4d30f2e-a54a-43ef-923f-103f4e0c54d5\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-9qzbv" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.735302 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-nbwj4"] Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.757463 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-mqdqj"] Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.758256 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mqdqj" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.763926 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-7rk4n" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.776955 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-64dzx"] Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.782662 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-mqdqj"] Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.801531 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-wpzsn"] Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.802808 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-wpzsn" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.806844 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-99pjp" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.812822 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-ff5c8777-ft44h"] Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.813583 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-ff5c8777-ft44h" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.817718 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.817858 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-8twcs" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.831390 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-wpzsn"] Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.832206 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mjsp\" (UniqueName: \"kubernetes.io/projected/7030ebc6-bc71-4388-97c3-14403919b886-kube-api-access-6mjsp\") pod \"cinder-operator-controller-manager-5d946d989d-nbwj4\" (UID: \"7030ebc6-bc71-4388-97c3-14403919b886\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nbwj4" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.832249 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxxd7\" (UniqueName: \"kubernetes.io/projected/5e432327-20d1-4688-8dda-173b62e27110-kube-api-access-zxxd7\") pod \"designate-operator-controller-manager-6d8bf5c495-64dzx\" (UID: \"5e432327-20d1-4688-8dda-173b62e27110\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-64dzx" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.832274 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4pmt\" (UniqueName: \"kubernetes.io/projected/c4d30f2e-a54a-43ef-923f-103f4e0c54d5-kube-api-access-f4pmt\") pod \"barbican-operator-controller-manager-868647ff47-9qzbv\" (UID: \"c4d30f2e-a54a-43ef-923f-103f4e0c54d5\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-9qzbv" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.832291 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggsqs\" (UniqueName: \"kubernetes.io/projected/3a32bda7-58e5-494b-af42-d255934537ac-kube-api-access-ggsqs\") pod \"glance-operator-controller-manager-77987464f4-zq66h\" (UID: \"3a32bda7-58e5-494b-af42-d255934537ac\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-zq66h" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.834381 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-fnnbk"] Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.835207 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fnnbk" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.843587 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-m5lx9" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.866857 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-ff5c8777-ft44h"] Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.877173 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxxd7\" (UniqueName: \"kubernetes.io/projected/5e432327-20d1-4688-8dda-173b62e27110-kube-api-access-zxxd7\") pod \"designate-operator-controller-manager-6d8bf5c495-64dzx\" (UID: \"5e432327-20d1-4688-8dda-173b62e27110\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-64dzx" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.888043 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggsqs\" (UniqueName: \"kubernetes.io/projected/3a32bda7-58e5-494b-af42-d255934537ac-kube-api-access-ggsqs\") pod \"glance-operator-controller-manager-77987464f4-zq66h\" (UID: \"3a32bda7-58e5-494b-af42-d255934537ac\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-zq66h" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.888282 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-xqwqr"] Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.889045 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xqwqr" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.893950 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4pmt\" (UniqueName: \"kubernetes.io/projected/c4d30f2e-a54a-43ef-923f-103f4e0c54d5-kube-api-access-f4pmt\") pod \"barbican-operator-controller-manager-868647ff47-9qzbv\" (UID: \"c4d30f2e-a54a-43ef-923f-103f4e0c54d5\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-9qzbv" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.894415 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-7v4nx" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.917501 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-9qzbv" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.924273 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mjsp\" (UniqueName: \"kubernetes.io/projected/7030ebc6-bc71-4388-97c3-14403919b886-kube-api-access-6mjsp\") pod \"cinder-operator-controller-manager-5d946d989d-nbwj4\" (UID: \"7030ebc6-bc71-4388-97c3-14403919b886\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nbwj4" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.935465 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqw48\" (UniqueName: \"kubernetes.io/projected/09667f14-d7ef-4502-9880-b0f14e2247b8-kube-api-access-dqw48\") pod \"keystone-operator-controller-manager-b4d948c87-xqwqr\" (UID: \"09667f14-d7ef-4502-9880-b0f14e2247b8\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xqwqr" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.935504 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjdlh\" (UniqueName: \"kubernetes.io/projected/85f8c95f-eecb-465c-be1f-c311edcbe597-kube-api-access-gjdlh\") pod \"horizon-operator-controller-manager-5b9b8895d5-wpzsn\" (UID: \"85f8c95f-eecb-465c-be1f-c311edcbe597\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-wpzsn" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.935526 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c4lv\" (UniqueName: \"kubernetes.io/projected/a61ed4b7-335a-429e-8c5a-17b2d322c78e-kube-api-access-7c4lv\") pod \"infra-operator-controller-manager-ff5c8777-ft44h\" (UID: \"a61ed4b7-335a-429e-8c5a-17b2d322c78e\") " pod="openstack-operators/infra-operator-controller-manager-ff5c8777-ft44h" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.935558 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9dnt\" (UniqueName: \"kubernetes.io/projected/dd0b1acf-7e52-47f1-9fa3-c82be1b7ce8f-kube-api-access-h9dnt\") pod \"heat-operator-controller-manager-69f49c598c-mqdqj\" (UID: \"dd0b1acf-7e52-47f1-9fa3-c82be1b7ce8f\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mqdqj" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.935582 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46phb\" (UniqueName: \"kubernetes.io/projected/25257448-9b91-4b64-9279-e3e51e752019-kube-api-access-46phb\") pod \"ironic-operator-controller-manager-554564d7fc-fnnbk\" (UID: \"25257448-9b91-4b64-9279-e3e51e752019\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fnnbk" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.935624 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a61ed4b7-335a-429e-8c5a-17b2d322c78e-cert\") pod \"infra-operator-controller-manager-ff5c8777-ft44h\" (UID: \"a61ed4b7-335a-429e-8c5a-17b2d322c78e\") " pod="openstack-operators/infra-operator-controller-manager-ff5c8777-ft44h" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.943714 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-fnnbk"] Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.963956 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-xqwqr"] Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.964555 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-64dzx" Feb 17 17:58:57 crc kubenswrapper[4680]: I0217 17:58:57.968619 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nbwj4" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:57.998407 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-th6kc"] Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:57.999206 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-th6kc" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.008600 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-pl6nx" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.015028 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-zq66h" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.036446 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjdlh\" (UniqueName: \"kubernetes.io/projected/85f8c95f-eecb-465c-be1f-c311edcbe597-kube-api-access-gjdlh\") pod \"horizon-operator-controller-manager-5b9b8895d5-wpzsn\" (UID: \"85f8c95f-eecb-465c-be1f-c311edcbe597\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-wpzsn" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.036494 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqw48\" (UniqueName: \"kubernetes.io/projected/09667f14-d7ef-4502-9880-b0f14e2247b8-kube-api-access-dqw48\") pod \"keystone-operator-controller-manager-b4d948c87-xqwqr\" (UID: \"09667f14-d7ef-4502-9880-b0f14e2247b8\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xqwqr" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.036529 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c4lv\" (UniqueName: \"kubernetes.io/projected/a61ed4b7-335a-429e-8c5a-17b2d322c78e-kube-api-access-7c4lv\") pod \"infra-operator-controller-manager-ff5c8777-ft44h\" (UID: \"a61ed4b7-335a-429e-8c5a-17b2d322c78e\") " pod="openstack-operators/infra-operator-controller-manager-ff5c8777-ft44h" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.036574 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9dnt\" (UniqueName: \"kubernetes.io/projected/dd0b1acf-7e52-47f1-9fa3-c82be1b7ce8f-kube-api-access-h9dnt\") pod \"heat-operator-controller-manager-69f49c598c-mqdqj\" (UID: \"dd0b1acf-7e52-47f1-9fa3-c82be1b7ce8f\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mqdqj" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.036609 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46phb\" 
(UniqueName: \"kubernetes.io/projected/25257448-9b91-4b64-9279-e3e51e752019-kube-api-access-46phb\") pod \"ironic-operator-controller-manager-554564d7fc-fnnbk\" (UID: \"25257448-9b91-4b64-9279-e3e51e752019\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fnnbk" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.036674 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a61ed4b7-335a-429e-8c5a-17b2d322c78e-cert\") pod \"infra-operator-controller-manager-ff5c8777-ft44h\" (UID: \"a61ed4b7-335a-429e-8c5a-17b2d322c78e\") " pod="openstack-operators/infra-operator-controller-manager-ff5c8777-ft44h" Feb 17 17:58:58 crc kubenswrapper[4680]: E0217 17:58:58.036818 4680 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 17:58:58 crc kubenswrapper[4680]: E0217 17:58:58.036968 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a61ed4b7-335a-429e-8c5a-17b2d322c78e-cert podName:a61ed4b7-335a-429e-8c5a-17b2d322c78e nodeName:}" failed. No retries permitted until 2026-02-17 17:58:58.536932912 +0000 UTC m=+868.087831591 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a61ed4b7-335a-429e-8c5a-17b2d322c78e-cert") pod "infra-operator-controller-manager-ff5c8777-ft44h" (UID: "a61ed4b7-335a-429e-8c5a-17b2d322c78e") : secret "infra-operator-webhook-server-cert" not found Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.059832 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-bxxqw"] Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.060633 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-bxxqw" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.072482 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-gkq9t" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.078633 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c4lv\" (UniqueName: \"kubernetes.io/projected/a61ed4b7-335a-429e-8c5a-17b2d322c78e-kube-api-access-7c4lv\") pod \"infra-operator-controller-manager-ff5c8777-ft44h\" (UID: \"a61ed4b7-335a-429e-8c5a-17b2d322c78e\") " pod="openstack-operators/infra-operator-controller-manager-ff5c8777-ft44h" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.079412 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-th6kc"] Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.083805 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9dnt\" (UniqueName: \"kubernetes.io/projected/dd0b1acf-7e52-47f1-9fa3-c82be1b7ce8f-kube-api-access-h9dnt\") pod \"heat-operator-controller-manager-69f49c598c-mqdqj\" (UID: \"dd0b1acf-7e52-47f1-9fa3-c82be1b7ce8f\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mqdqj" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.085067 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjdlh\" (UniqueName: \"kubernetes.io/projected/85f8c95f-eecb-465c-be1f-c311edcbe597-kube-api-access-gjdlh\") pod \"horizon-operator-controller-manager-5b9b8895d5-wpzsn\" (UID: \"85f8c95f-eecb-465c-be1f-c311edcbe597\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-wpzsn" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.087111 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqw48\" (UniqueName: \"kubernetes.io/projected/09667f14-d7ef-4502-9880-b0f14e2247b8-kube-api-access-dqw48\") pod \"keystone-operator-controller-manager-b4d948c87-xqwqr\" (UID: \"09667f14-d7ef-4502-9880-b0f14e2247b8\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xqwqr" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.088726 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46phb\" (UniqueName: \"kubernetes.io/projected/25257448-9b91-4b64-9279-e3e51e752019-kube-api-access-46phb\") pod \"ironic-operator-controller-manager-554564d7fc-fnnbk\" (UID: \"25257448-9b91-4b64-9279-e3e51e752019\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fnnbk" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.097891 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-n45mv"] Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.098647 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-n45mv" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.101273 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-9dr9g" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.124806 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-wpzsn" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.138551 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5h7b\" (UniqueName: \"kubernetes.io/projected/01a45563-cabc-4dbc-80cb-49b274ab96cd-kube-api-access-k5h7b\") pod \"manila-operator-controller-manager-54f6768c69-bxxqw\" (UID: \"01a45563-cabc-4dbc-80cb-49b274ab96cd\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-bxxqw" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.138610 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwl58\" (UniqueName: \"kubernetes.io/projected/44afd1e8-ecdc-4854-bafa-d2bdabe26c5b-kube-api-access-mwl58\") pod \"neutron-operator-controller-manager-64ddbf8bb-n45mv\" (UID: \"44afd1e8-ecdc-4854-bafa-d2bdabe26c5b\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-n45mv" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.138680 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r825r\" (UniqueName: \"kubernetes.io/projected/27bd44a3-f6bc-462d-92a4-53287bd71b8f-kube-api-access-r825r\") pod \"mariadb-operator-controller-manager-6994f66f48-th6kc\" (UID: \"27bd44a3-f6bc-462d-92a4-53287bd71b8f\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-th6kc" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.157371 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-bxxqw"] Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.173249 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fnnbk" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.179075 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-jpnmh"] Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.179918 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jpnmh" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.193665 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-bfkrw" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.222663 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-n45mv"] Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.244669 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5h7b\" (UniqueName: \"kubernetes.io/projected/01a45563-cabc-4dbc-80cb-49b274ab96cd-kube-api-access-k5h7b\") pod \"manila-operator-controller-manager-54f6768c69-bxxqw\" (UID: \"01a45563-cabc-4dbc-80cb-49b274ab96cd\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-bxxqw" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.244737 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwl58\" (UniqueName: \"kubernetes.io/projected/44afd1e8-ecdc-4854-bafa-d2bdabe26c5b-kube-api-access-mwl58\") pod \"neutron-operator-controller-manager-64ddbf8bb-n45mv\" (UID: \"44afd1e8-ecdc-4854-bafa-d2bdabe26c5b\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-n45mv" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.244798 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r825r\" (UniqueName: \"kubernetes.io/projected/27bd44a3-f6bc-462d-92a4-53287bd71b8f-kube-api-access-r825r\") pod \"mariadb-operator-controller-manager-6994f66f48-th6kc\" (UID: \"27bd44a3-f6bc-462d-92a4-53287bd71b8f\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-th6kc" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.246268 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-7pstg"] Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.255520 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-jpnmh"] Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.255637 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7pstg" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.259892 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-fsrrp" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.299647 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r825r\" (UniqueName: \"kubernetes.io/projected/27bd44a3-f6bc-462d-92a4-53287bd71b8f-kube-api-access-r825r\") pod \"mariadb-operator-controller-manager-6994f66f48-th6kc\" (UID: \"27bd44a3-f6bc-462d-92a4-53287bd71b8f\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-th6kc" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.306190 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5h7b\" (UniqueName: \"kubernetes.io/projected/01a45563-cabc-4dbc-80cb-49b274ab96cd-kube-api-access-k5h7b\") pod \"manila-operator-controller-manager-54f6768c69-bxxqw\" (UID: \"01a45563-cabc-4dbc-80cb-49b274ab96cd\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-bxxqw" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.306848 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwl58\" (UniqueName: \"kubernetes.io/projected/44afd1e8-ecdc-4854-bafa-d2bdabe26c5b-kube-api-access-mwl58\") pod \"neutron-operator-controller-manager-64ddbf8bb-n45mv\" (UID: \"44afd1e8-ecdc-4854-bafa-d2bdabe26c5b\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-n45mv" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.316570 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-7pstg"] Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.323477 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xqwqr" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.340355 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-th6kc" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.348644 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv"] Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.374289 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.384610 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mqdqj" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.385298 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.385566 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-sdsvh" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.389873 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fc4j\" (UniqueName: \"kubernetes.io/projected/73c67b77-ecaf-4925-aba5-855dc368e729-kube-api-access-7fc4j\") pod \"nova-operator-controller-manager-567668f5cf-jpnmh\" (UID: \"73c67b77-ecaf-4925-aba5-855dc368e729\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jpnmh" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.389941 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q285t\" (UniqueName: \"kubernetes.io/projected/5663f682-3b3a-4061-8d4a-377a103c0bd4-kube-api-access-q285t\") pod \"octavia-operator-controller-manager-69f8888797-7pstg\" (UID: \"5663f682-3b3a-4061-8d4a-377a103c0bd4\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7pstg" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.399249 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-bxxqw" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.414053 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-htc8b"] Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.415413 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-htc8b" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.419507 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-fddcs" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.475953 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-n45mv" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.477101 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv"] Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.481036 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-htc8b"] Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.490946 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fc4j\" (UniqueName: \"kubernetes.io/projected/73c67b77-ecaf-4925-aba5-855dc368e729-kube-api-access-7fc4j\") pod \"nova-operator-controller-manager-567668f5cf-jpnmh\" (UID: \"73c67b77-ecaf-4925-aba5-855dc368e729\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jpnmh" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.490993 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7zpk\" (UniqueName: \"kubernetes.io/projected/3a909135-a1d3-4a69-875d-74f19de59cf1-kube-api-access-p7zpk\") pod \"openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv\" (UID: \"3a909135-a1d3-4a69-875d-74f19de59cf1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.491021 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q285t\" (UniqueName: \"kubernetes.io/projected/5663f682-3b3a-4061-8d4a-377a103c0bd4-kube-api-access-q285t\") pod \"octavia-operator-controller-manager-69f8888797-7pstg\" (UID: \"5663f682-3b3a-4061-8d4a-377a103c0bd4\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7pstg" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.491045 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a909135-a1d3-4a69-875d-74f19de59cf1-cert\") pod \"openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv\" (UID: \"3a909135-a1d3-4a69-875d-74f19de59cf1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.491066 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq2l7\" (UniqueName: \"kubernetes.io/projected/36e854c8-4742-4138-8766-6614fa005bae-kube-api-access-kq2l7\") pod \"ovn-operator-controller-manager-d44cf6b75-htc8b\" (UID: \"36e854c8-4742-4138-8766-6614fa005bae\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-htc8b" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.529405 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fc4j\" (UniqueName: \"kubernetes.io/projected/73c67b77-ecaf-4925-aba5-855dc368e729-kube-api-access-7fc4j\") pod \"nova-operator-controller-manager-567668f5cf-jpnmh\" (UID: \"73c67b77-ecaf-4925-aba5-855dc368e729\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jpnmh" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.534137 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-d5n5g"] Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 
17:58:58.535401 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-d5n5g" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.538051 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-z92nn" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.541028 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-rw6wk"] Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.543383 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rw6wk" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.546737 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jpnmh" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.547689 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-g6cdl" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.557383 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-6628n"] Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.565760 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-6628n" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.570011 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-dhvhh" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.571332 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q285t\" (UniqueName: \"kubernetes.io/projected/5663f682-3b3a-4061-8d4a-377a103c0bd4-kube-api-access-q285t\") pod \"octavia-operator-controller-manager-69f8888797-7pstg\" (UID: \"5663f682-3b3a-4061-8d4a-377a103c0bd4\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7pstg" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.576271 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-6628n"] Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.589083 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-d5n5g"] Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.593620 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a61ed4b7-335a-429e-8c5a-17b2d322c78e-cert\") pod \"infra-operator-controller-manager-ff5c8777-ft44h\" (UID: \"a61ed4b7-335a-429e-8c5a-17b2d322c78e\") " pod="openstack-operators/infra-operator-controller-manager-ff5c8777-ft44h" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.593686 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7zpk\" (UniqueName: \"kubernetes.io/projected/3a909135-a1d3-4a69-875d-74f19de59cf1-kube-api-access-p7zpk\") pod \"openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv\" (UID: \"3a909135-a1d3-4a69-875d-74f19de59cf1\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.593711 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcg9j\" (UniqueName: \"kubernetes.io/projected/eeb4f8c1-4744-4e7f-a8a4-a2deb95f4b45-kube-api-access-rcg9j\") pod \"placement-operator-controller-manager-8497b45c89-rw6wk\" (UID: \"eeb4f8c1-4744-4e7f-a8a4-a2deb95f4b45\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rw6wk" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.593823 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a909135-a1d3-4a69-875d-74f19de59cf1-cert\") pod \"openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv\" (UID: \"3a909135-a1d3-4a69-875d-74f19de59cf1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.593848 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq2l7\" (UniqueName: \"kubernetes.io/projected/36e854c8-4742-4138-8766-6614fa005bae-kube-api-access-kq2l7\") pod \"ovn-operator-controller-manager-d44cf6b75-htc8b\" (UID: \"36e854c8-4742-4138-8766-6614fa005bae\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-htc8b" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.593951 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxj42\" (UniqueName: \"kubernetes.io/projected/41d9cacc-1317-479d-99ff-8eb7e6f3d6b6-kube-api-access-rxj42\") pod \"swift-operator-controller-manager-68f46476f-d5n5g\" (UID: \"41d9cacc-1317-479d-99ff-8eb7e6f3d6b6\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-d5n5g" Feb 17 17:58:58 crc kubenswrapper[4680]: E0217 17:58:58.594263 4680 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 17:58:58 crc kubenswrapper[4680]: E0217 17:58:58.594333 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a61ed4b7-335a-429e-8c5a-17b2d322c78e-cert podName:a61ed4b7-335a-429e-8c5a-17b2d322c78e nodeName:}" failed. No retries permitted until 2026-02-17 17:58:59.59429817 +0000 UTC m=+869.145196849 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a61ed4b7-335a-429e-8c5a-17b2d322c78e-cert") pod "infra-operator-controller-manager-ff5c8777-ft44h" (UID: "a61ed4b7-335a-429e-8c5a-17b2d322c78e") : secret "infra-operator-webhook-server-cert" not found Feb 17 17:58:58 crc kubenswrapper[4680]: E0217 17:58:58.594870 4680 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 17:58:58 crc kubenswrapper[4680]: E0217 17:58:58.594919 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a909135-a1d3-4a69-875d-74f19de59cf1-cert podName:3a909135-a1d3-4a69-875d-74f19de59cf1 nodeName:}" failed. No retries permitted until 2026-02-17 17:58:59.094893599 +0000 UTC m=+868.645792278 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3a909135-a1d3-4a69-875d-74f19de59cf1-cert") pod "openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv" (UID: "3a909135-a1d3-4a69-875d-74f19de59cf1") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.605465 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-rw6wk"] Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.609300 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7pstg" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.634664 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7zpk\" (UniqueName: \"kubernetes.io/projected/3a909135-a1d3-4a69-875d-74f19de59cf1-kube-api-access-p7zpk\") pod \"openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv\" (UID: \"3a909135-a1d3-4a69-875d-74f19de59cf1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.654111 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq2l7\" (UniqueName: \"kubernetes.io/projected/36e854c8-4742-4138-8766-6614fa005bae-kube-api-access-kq2l7\") pod \"ovn-operator-controller-manager-d44cf6b75-htc8b\" (UID: \"36e854c8-4742-4138-8766-6614fa005bae\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-htc8b" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.662139 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-vgwfn"] Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.666554 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-vgwfn" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.674425 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-mbsgf" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.675332 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-vgwfn"] Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.685863 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-t8p47"] Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.687228 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-t8p47" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.700952 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxj42\" (UniqueName: \"kubernetes.io/projected/41d9cacc-1317-479d-99ff-8eb7e6f3d6b6-kube-api-access-rxj42\") pod \"swift-operator-controller-manager-68f46476f-d5n5g\" (UID: \"41d9cacc-1317-479d-99ff-8eb7e6f3d6b6\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-d5n5g" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.701003 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfq5b\" (UniqueName: \"kubernetes.io/projected/41c69a9d-0d86-4088-9490-262b3eff9441-kube-api-access-nfq5b\") pod \"telemetry-operator-controller-manager-7f45b4ff68-6628n\" (UID: \"41c69a9d-0d86-4088-9490-262b3eff9441\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-6628n" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.701114 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcg9j\" (UniqueName: \"kubernetes.io/projected/eeb4f8c1-4744-4e7f-a8a4-a2deb95f4b45-kube-api-access-rcg9j\") pod \"placement-operator-controller-manager-8497b45c89-rw6wk\" (UID: \"eeb4f8c1-4744-4e7f-a8a4-a2deb95f4b45\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rw6wk" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.703142 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-n5tnd" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.718089 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-t8p47"] Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.745903 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxj42\" (UniqueName: \"kubernetes.io/projected/41d9cacc-1317-479d-99ff-8eb7e6f3d6b6-kube-api-access-rxj42\") pod \"swift-operator-controller-manager-68f46476f-d5n5g\" (UID: \"41d9cacc-1317-479d-99ff-8eb7e6f3d6b6\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-d5n5g" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.760986 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcg9j\" (UniqueName: \"kubernetes.io/projected/eeb4f8c1-4744-4e7f-a8a4-a2deb95f4b45-kube-api-access-rcg9j\") pod \"placement-operator-controller-manager-8497b45c89-rw6wk\" (UID: \"eeb4f8c1-4744-4e7f-a8a4-a2deb95f4b45\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rw6wk" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.773856 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-b96b9dfc9-r9bnq"] Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.774736 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-b96b9dfc9-r9bnq" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.778856 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-7jqgg" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.779036 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.779157 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.805068 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfq5b\" (UniqueName: \"kubernetes.io/projected/41c69a9d-0d86-4088-9490-262b3eff9441-kube-api-access-nfq5b\") pod \"telemetry-operator-controller-manager-7f45b4ff68-6628n\" (UID: \"41c69a9d-0d86-4088-9490-262b3eff9441\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-6628n" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.805129 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckql6\" (UniqueName: \"kubernetes.io/projected/6f60616b-3009-4961-af69-11a3d3e42f4b-kube-api-access-ckql6\") pod \"test-operator-controller-manager-7866795846-vgwfn\" (UID: \"6f60616b-3009-4961-af69-11a3d3e42f4b\") " pod="openstack-operators/test-operator-controller-manager-7866795846-vgwfn" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.805160 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9t2x\" (UniqueName: \"kubernetes.io/projected/827a72ee-4516-4fe2-866a-a87124fe7439-kube-api-access-q9t2x\") pod \"watcher-operator-controller-manager-5db88f68c-t8p47\" (UID: \"827a72ee-4516-4fe2-866a-a87124fe7439\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-t8p47" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.828156 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-b96b9dfc9-r9bnq"] Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.846026 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-htc8b" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.855338 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6k4kw"] Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.856187 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6k4kw" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.860957 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-57k24" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.882904 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6k4kw"] Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.896412 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-d5n5g" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.907064 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rw6wk" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.908561 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-webhook-certs\") pod \"openstack-operator-controller-manager-b96b9dfc9-r9bnq\" (UID: \"1982baea-3b82-4376-ab68-6559095032d5\") " pod="openstack-operators/openstack-operator-controller-manager-b96b9dfc9-r9bnq" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.908598 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj2fh\" (UniqueName: \"kubernetes.io/projected/1982baea-3b82-4376-ab68-6559095032d5-kube-api-access-rj2fh\") pod \"openstack-operator-controller-manager-b96b9dfc9-r9bnq\" (UID: \"1982baea-3b82-4376-ab68-6559095032d5\") " pod="openstack-operators/openstack-operator-controller-manager-b96b9dfc9-r9bnq" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.908641 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn2vm\" (UniqueName: \"kubernetes.io/projected/7e8462ff-c9e3-4cff-bd4b-7f94d8acb8b1-kube-api-access-fn2vm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-6k4kw\" (UID: \"7e8462ff-c9e3-4cff-bd4b-7f94d8acb8b1\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6k4kw" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.908751 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckql6\" (UniqueName: \"kubernetes.io/projected/6f60616b-3009-4961-af69-11a3d3e42f4b-kube-api-access-ckql6\") pod \"test-operator-controller-manager-7866795846-vgwfn\" (UID: \"6f60616b-3009-4961-af69-11a3d3e42f4b\") " pod="openstack-operators/test-operator-controller-manager-7866795846-vgwfn" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.908816 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9t2x\" (UniqueName: \"kubernetes.io/projected/827a72ee-4516-4fe2-866a-a87124fe7439-kube-api-access-q9t2x\") pod \"watcher-operator-controller-manager-5db88f68c-t8p47\" (UID: \"827a72ee-4516-4fe2-866a-a87124fe7439\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-t8p47" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.908947 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-metrics-certs\") pod \"openstack-operator-controller-manager-b96b9dfc9-r9bnq\" (UID: \"1982baea-3b82-4376-ab68-6559095032d5\") " pod="openstack-operators/openstack-operator-controller-manager-b96b9dfc9-r9bnq" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.949119 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckql6\" (UniqueName: \"kubernetes.io/projected/6f60616b-3009-4961-af69-11a3d3e42f4b-kube-api-access-ckql6\") pod \"test-operator-controller-manager-7866795846-vgwfn\" (UID: \"6f60616b-3009-4961-af69-11a3d3e42f4b\") " 
pod="openstack-operators/test-operator-controller-manager-7866795846-vgwfn" Feb 17 17:58:58 crc kubenswrapper[4680]: I0217 17:58:58.953800 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfq5b\" (UniqueName: \"kubernetes.io/projected/41c69a9d-0d86-4088-9490-262b3eff9441-kube-api-access-nfq5b\") pod \"telemetry-operator-controller-manager-7f45b4ff68-6628n\" (UID: \"41c69a9d-0d86-4088-9490-262b3eff9441\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-6628n" Feb 17 17:58:59 crc kubenswrapper[4680]: I0217 17:58:58.999284 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9t2x\" (UniqueName: \"kubernetes.io/projected/827a72ee-4516-4fe2-866a-a87124fe7439-kube-api-access-q9t2x\") pod \"watcher-operator-controller-manager-5db88f68c-t8p47\" (UID: \"827a72ee-4516-4fe2-866a-a87124fe7439\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-t8p47" Feb 17 17:58:59 crc kubenswrapper[4680]: I0217 17:58:59.009704 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-metrics-certs\") pod \"openstack-operator-controller-manager-b96b9dfc9-r9bnq\" (UID: \"1982baea-3b82-4376-ab68-6559095032d5\") " pod="openstack-operators/openstack-operator-controller-manager-b96b9dfc9-r9bnq" Feb 17 17:58:59 crc kubenswrapper[4680]: I0217 17:58:59.009769 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-webhook-certs\") pod \"openstack-operator-controller-manager-b96b9dfc9-r9bnq\" (UID: \"1982baea-3b82-4376-ab68-6559095032d5\") " pod="openstack-operators/openstack-operator-controller-manager-b96b9dfc9-r9bnq" Feb 17 17:58:59 crc kubenswrapper[4680]: I0217 17:58:59.009793 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj2fh\" (UniqueName: \"kubernetes.io/projected/1982baea-3b82-4376-ab68-6559095032d5-kube-api-access-rj2fh\") pod \"openstack-operator-controller-manager-b96b9dfc9-r9bnq\" (UID: \"1982baea-3b82-4376-ab68-6559095032d5\") " pod="openstack-operators/openstack-operator-controller-manager-b96b9dfc9-r9bnq" Feb 17 17:58:59 crc kubenswrapper[4680]: I0217 17:58:59.009817 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn2vm\" (UniqueName: \"kubernetes.io/projected/7e8462ff-c9e3-4cff-bd4b-7f94d8acb8b1-kube-api-access-fn2vm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-6k4kw\" (UID: \"7e8462ff-c9e3-4cff-bd4b-7f94d8acb8b1\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6k4kw" Feb 17 17:58:59 crc kubenswrapper[4680]: E0217 17:58:59.010239 4680 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 17:58:59 crc kubenswrapper[4680]: E0217 17:58:59.010276 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-metrics-certs podName:1982baea-3b82-4376-ab68-6559095032d5 nodeName:}" failed. No retries permitted until 2026-02-17 17:58:59.510264543 +0000 UTC m=+869.061163222 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-metrics-certs") pod "openstack-operator-controller-manager-b96b9dfc9-r9bnq" (UID: "1982baea-3b82-4376-ab68-6559095032d5") : secret "metrics-server-cert" not found Feb 17 17:58:59 crc kubenswrapper[4680]: E0217 17:58:59.010425 4680 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 17:58:59 crc kubenswrapper[4680]: E0217 17:58:59.010448 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-webhook-certs podName:1982baea-3b82-4376-ab68-6559095032d5 nodeName:}" failed. No retries permitted until 2026-02-17 17:58:59.510441539 +0000 UTC m=+869.061340208 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-webhook-certs") pod "openstack-operator-controller-manager-b96b9dfc9-r9bnq" (UID: "1982baea-3b82-4376-ab68-6559095032d5") : secret "webhook-server-cert" not found Feb 17 17:58:59 crc kubenswrapper[4680]: I0217 17:58:59.025727 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-9qzbv"] Feb 17 17:58:59 crc kubenswrapper[4680]: I0217 17:58:59.051076 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-vgwfn" Feb 17 17:58:59 crc kubenswrapper[4680]: I0217 17:58:59.052258 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn2vm\" (UniqueName: \"kubernetes.io/projected/7e8462ff-c9e3-4cff-bd4b-7f94d8acb8b1-kube-api-access-fn2vm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-6k4kw\" (UID: \"7e8462ff-c9e3-4cff-bd4b-7f94d8acb8b1\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6k4kw" Feb 17 17:58:59 crc kubenswrapper[4680]: I0217 17:58:59.053405 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj2fh\" (UniqueName: \"kubernetes.io/projected/1982baea-3b82-4376-ab68-6559095032d5-kube-api-access-rj2fh\") pod \"openstack-operator-controller-manager-b96b9dfc9-r9bnq\" (UID: \"1982baea-3b82-4376-ab68-6559095032d5\") " pod="openstack-operators/openstack-operator-controller-manager-b96b9dfc9-r9bnq" Feb 17 17:58:59 crc kubenswrapper[4680]: I0217 17:58:59.061681 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-t8p47" Feb 17 17:58:59 crc kubenswrapper[4680]: I0217 17:58:59.069981 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6k4kw" Feb 17 17:58:59 crc kubenswrapper[4680]: I0217 17:58:59.111181 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a909135-a1d3-4a69-875d-74f19de59cf1-cert\") pod \"openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv\" (UID: \"3a909135-a1d3-4a69-875d-74f19de59cf1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv" Feb 17 17:58:59 crc kubenswrapper[4680]: E0217 17:58:59.112165 4680 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 17:58:59 crc kubenswrapper[4680]: E0217 17:58:59.112214 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a909135-a1d3-4a69-875d-74f19de59cf1-cert podName:3a909135-a1d3-4a69-875d-74f19de59cf1 nodeName:}" failed. No retries permitted until 2026-02-17 17:59:00.112199218 +0000 UTC m=+869.663097897 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3a909135-a1d3-4a69-875d-74f19de59cf1-cert") pod "openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv" (UID: "3a909135-a1d3-4a69-875d-74f19de59cf1") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 17:58:59 crc kubenswrapper[4680]: I0217 17:58:59.221988 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-6628n" Feb 17 17:58:59 crc kubenswrapper[4680]: I0217 17:58:59.364498 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-nbwj4"] Feb 17 17:58:59 crc kubenswrapper[4680]: I0217 17:58:59.439870 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-wpzsn"] Feb 17 17:58:59 crc kubenswrapper[4680]: I0217 17:58:59.464648 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-64dzx"] Feb 17 17:58:59 crc kubenswrapper[4680]: I0217 17:58:59.521128 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-metrics-certs\") pod \"openstack-operator-controller-manager-b96b9dfc9-r9bnq\" (UID: \"1982baea-3b82-4376-ab68-6559095032d5\") " pod="openstack-operators/openstack-operator-controller-manager-b96b9dfc9-r9bnq" Feb 17 17:58:59 crc kubenswrapper[4680]: I0217 17:58:59.521684 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-webhook-certs\") pod \"openstack-operator-controller-manager-b96b9dfc9-r9bnq\" (UID: \"1982baea-3b82-4376-ab68-6559095032d5\") " pod="openstack-operators/openstack-operator-controller-manager-b96b9dfc9-r9bnq" Feb 17 17:58:59 crc kubenswrapper[4680]: E0217 17:58:59.521867 4680 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 17:58:59 crc kubenswrapper[4680]: E0217 17:58:59.521943 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-webhook-certs 
podName:1982baea-3b82-4376-ab68-6559095032d5 nodeName:}" failed. No retries permitted until 2026-02-17 17:59:00.521924234 +0000 UTC m=+870.072822913 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-webhook-certs") pod "openstack-operator-controller-manager-b96b9dfc9-r9bnq" (UID: "1982baea-3b82-4376-ab68-6559095032d5") : secret "webhook-server-cert" not found Feb 17 17:58:59 crc kubenswrapper[4680]: E0217 17:58:59.522457 4680 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 17:58:59 crc kubenswrapper[4680]: E0217 17:58:59.522492 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-metrics-certs podName:1982baea-3b82-4376-ab68-6559095032d5 nodeName:}" failed. No retries permitted until 2026-02-17 17:59:00.522481382 +0000 UTC m=+870.073380061 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-metrics-certs") pod "openstack-operator-controller-manager-b96b9dfc9-r9bnq" (UID: "1982baea-3b82-4376-ab68-6559095032d5") : secret "metrics-server-cert" not found Feb 17 17:58:59 crc kubenswrapper[4680]: I0217 17:58:59.626608 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a61ed4b7-335a-429e-8c5a-17b2d322c78e-cert\") pod \"infra-operator-controller-manager-ff5c8777-ft44h\" (UID: \"a61ed4b7-335a-429e-8c5a-17b2d322c78e\") " pod="openstack-operators/infra-operator-controller-manager-ff5c8777-ft44h" Feb 17 17:58:59 crc kubenswrapper[4680]: E0217 17:58:59.626760 4680 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 17:58:59 crc kubenswrapper[4680]: E0217 17:58:59.626805 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a61ed4b7-335a-429e-8c5a-17b2d322c78e-cert podName:a61ed4b7-335a-429e-8c5a-17b2d322c78e nodeName:}" failed. No retries permitted until 2026-02-17 17:59:01.626791742 +0000 UTC m=+871.177690421 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a61ed4b7-335a-429e-8c5a-17b2d322c78e-cert") pod "infra-operator-controller-manager-ff5c8777-ft44h" (UID: "a61ed4b7-335a-429e-8c5a-17b2d322c78e") : secret "infra-operator-webhook-server-cert" not found Feb 17 17:58:59 crc kubenswrapper[4680]: I0217 17:58:59.900826 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-jpnmh"] Feb 17 17:58:59 crc kubenswrapper[4680]: I0217 17:58:59.923303 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-fnnbk"] Feb 17 17:58:59 crc kubenswrapper[4680]: I0217 17:58:59.940066 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-zq66h"] Feb 17 17:58:59 crc kubenswrapper[4680]: I0217 17:58:59.955753 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-n45mv"] Feb 17 17:58:59 crc kubenswrapper[4680]: W0217 17:58:59.965914 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25257448_9b91_4b64_9279_e3e51e752019.slice/crio-ba6df5184edbff14a8f89f8b7c91d6d18f1b2ad3ed712010e550a6825f30b20d WatchSource:0}: Error finding container ba6df5184edbff14a8f89f8b7c91d6d18f1b2ad3ed712010e550a6825f30b20d: Status 404 returned error can't find the container with id ba6df5184edbff14a8f89f8b7c91d6d18f1b2ad3ed712010e550a6825f30b20d Feb 17 17:58:59 crc kubenswrapper[4680]: W0217 17:58:59.967603 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a32bda7_58e5_494b_af42_d255934537ac.slice/crio-8f2b84c87d8e2cafde780535757593b54cab8c74aef7b89d28f128a6371896d3 WatchSource:0}: Error finding container 8f2b84c87d8e2cafde780535757593b54cab8c74aef7b89d28f128a6371896d3: Status 404 returned error can't find the container with id 8f2b84c87d8e2cafde780535757593b54cab8c74aef7b89d28f128a6371896d3 Feb 17 17:58:59 crc kubenswrapper[4680]: I0217 17:58:59.968183 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-mqdqj"] Feb 17 17:58:59 crc kubenswrapper[4680]: I0217 17:58:59.995506 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-xqwqr"] Feb 17 17:59:00 crc kubenswrapper[4680]: W0217 17:59:00.018875 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44afd1e8_ecdc_4854_bafa_d2bdabe26c5b.slice/crio-e09d77199cc05d65821495e7b8033a60772180e716d42efe2a9f620b6dad9404 WatchSource:0}: Error finding container e09d77199cc05d65821495e7b8033a60772180e716d42efe2a9f620b6dad9404: Status 404 returned error can't find the container with id e09d77199cc05d65821495e7b8033a60772180e716d42efe2a9f620b6dad9404 Feb 17 17:59:00 crc kubenswrapper[4680]: I0217 17:59:00.066257 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-wpzsn" event={"ID":"85f8c95f-eecb-465c-be1f-c311edcbe597","Type":"ContainerStarted","Data":"bf6bd1ff897e878bbfe86812a64750d2efcaaee28593a79a559aa00d66753289"} Feb 17 17:59:00 crc kubenswrapper[4680]: I0217 17:59:00.067807 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mqdqj" event={"ID":"dd0b1acf-7e52-47f1-9fa3-c82be1b7ce8f","Type":"ContainerStarted","Data":"b7770af1ec492420bc1e568da29eae8088d1be659641ee693797dafab8c1b395"} Feb 17 17:59:00 crc kubenswrapper[4680]: I0217 17:59:00.071502 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fnnbk" event={"ID":"25257448-9b91-4b64-9279-e3e51e752019","Type":"ContainerStarted","Data":"ba6df5184edbff14a8f89f8b7c91d6d18f1b2ad3ed712010e550a6825f30b20d"} Feb 17 17:59:00 crc kubenswrapper[4680]: I0217 17:59:00.074089 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nbwj4" event={"ID":"7030ebc6-bc71-4388-97c3-14403919b886","Type":"ContainerStarted","Data":"ed750b58a8fa4176c645ca0f5ec31133ef1ccb50b85b03236031f5c2a64787ca"} Feb 17 17:59:00 crc kubenswrapper[4680]: I0217 17:59:00.091308 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jpnmh" event={"ID":"73c67b77-ecaf-4925-aba5-855dc368e729","Type":"ContainerStarted","Data":"9888f5d93eb93bd47daaf6a3aadc51f2efa7739e858fdade669c859b0ed4adfe"} Feb 17 17:59:00 crc kubenswrapper[4680]: I0217 17:59:00.094173 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-9qzbv" event={"ID":"c4d30f2e-a54a-43ef-923f-103f4e0c54d5","Type":"ContainerStarted","Data":"88f1cb7cc30e09a67d9e900a4951701fb2f4e0031464ec592c49535a5d4824c6"} Feb 17 17:59:00 crc kubenswrapper[4680]: I0217 17:59:00.100227 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-64dzx" event={"ID":"5e432327-20d1-4688-8dda-173b62e27110","Type":"ContainerStarted","Data":"1966742d249246fe6206290506947bb5503542a043ddcaec907f5f8f63044bcc"} Feb 17 17:59:00 crc kubenswrapper[4680]: I0217 17:59:00.103972 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-zq66h" event={"ID":"3a32bda7-58e5-494b-af42-d255934537ac","Type":"ContainerStarted","Data":"8f2b84c87d8e2cafde780535757593b54cab8c74aef7b89d28f128a6371896d3"} Feb 17 17:59:00 crc kubenswrapper[4680]: I0217 17:59:00.107183 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-n45mv" event={"ID":"44afd1e8-ecdc-4854-bafa-d2bdabe26c5b","Type":"ContainerStarted","Data":"e09d77199cc05d65821495e7b8033a60772180e716d42efe2a9f620b6dad9404"} Feb 17 17:59:00 crc kubenswrapper[4680]: I0217 17:59:00.134610 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a909135-a1d3-4a69-875d-74f19de59cf1-cert\") pod \"openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv\" (UID: \"3a909135-a1d3-4a69-875d-74f19de59cf1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv" Feb 17 17:59:00 crc kubenswrapper[4680]: E0217 17:59:00.134859 4680 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 17:59:00 crc kubenswrapper[4680]: E0217 17:59:00.134916 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a909135-a1d3-4a69-875d-74f19de59cf1-cert 
podName:3a909135-a1d3-4a69-875d-74f19de59cf1 nodeName:}" failed. No retries permitted until 2026-02-17 17:59:02.134898242 +0000 UTC m=+871.685796921 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3a909135-a1d3-4a69-875d-74f19de59cf1-cert") pod "openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv" (UID: "3a909135-a1d3-4a69-875d-74f19de59cf1") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 17:59:00 crc kubenswrapper[4680]: I0217 17:59:00.325026 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-rw6wk"] Feb 17 17:59:00 crc kubenswrapper[4680]: I0217 17:59:00.361194 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6k4kw"] Feb 17 17:59:00 crc kubenswrapper[4680]: W0217 17:59:00.369166 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod827a72ee_4516_4fe2_866a_a87124fe7439.slice/crio-0400a4695993d0ad4d28baddda91ef206c31378da67bb5f16b147ef68ea710d3 WatchSource:0}: Error finding container 0400a4695993d0ad4d28baddda91ef206c31378da67bb5f16b147ef68ea710d3: Status 404 returned error can't find the container with id 0400a4695993d0ad4d28baddda91ef206c31378da67bb5f16b147ef68ea710d3 Feb 17 17:59:00 crc kubenswrapper[4680]: I0217 17:59:00.379632 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-bxxqw"] Feb 17 17:59:00 crc kubenswrapper[4680]: I0217 17:59:00.387621 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-7pstg"] Feb 17 17:59:00 crc kubenswrapper[4680]: I0217 17:59:00.393385 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-vgwfn"] Feb 17 17:59:00 crc kubenswrapper[4680]: I0217 17:59:00.397719 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-t8p47"] Feb 17 17:59:00 crc kubenswrapper[4680]: I0217 17:59:00.402669 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-6628n"] Feb 17 17:59:00 crc kubenswrapper[4680]: W0217 17:59:00.406440 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5663f682_3b3a_4061_8d4a_377a103c0bd4.slice/crio-2cf3a359040d154d59ee9c7c35d6f593761520f1efa703f78526894c28e7e12e WatchSource:0}: Error finding container 2cf3a359040d154d59ee9c7c35d6f593761520f1efa703f78526894c28e7e12e: Status 404 returned error can't find the container with id 2cf3a359040d154d59ee9c7c35d6f593761520f1efa703f78526894c28e7e12e Feb 17 17:59:00 crc kubenswrapper[4680]: I0217 17:59:00.407927 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-th6kc"] Feb 17 17:59:00 crc kubenswrapper[4680]: W0217 17:59:00.414376 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36e854c8_4742_4138_8766_6614fa005bae.slice/crio-5b0829489b47ed60175a98dfef277896eb2acc9989652a735e6d38f0a5d865e0 WatchSource:0}: Error finding container 
5b0829489b47ed60175a98dfef277896eb2acc9989652a735e6d38f0a5d865e0: Status 404 returned error can't find the container with id 5b0829489b47ed60175a98dfef277896eb2acc9989652a735e6d38f0a5d865e0 Feb 17 17:59:00 crc kubenswrapper[4680]: I0217 17:59:00.423135 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-htc8b"] Feb 17 17:59:00 crc kubenswrapper[4680]: I0217 17:59:00.427998 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-d5n5g"] Feb 17 17:59:00 crc kubenswrapper[4680]: E0217 17:59:00.434007 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k5h7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-54f6768c69-bxxqw_openstack-operators(01a45563-cabc-4dbc-80cb-49b274ab96cd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 17:59:00 crc kubenswrapper[4680]: E0217 17:59:00.434369 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rcg9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-rw6wk_openstack-operators(eeb4f8c1-4744-4e7f-a8a4-a2deb95f4b45): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 17:59:00 crc kubenswrapper[4680]: E0217 17:59:00.434125 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r825r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6994f66f48-th6kc_openstack-operators(27bd44a3-f6bc-462d-92a4-53287bd71b8f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 17:59:00 crc kubenswrapper[4680]: E0217 17:59:00.435966 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-th6kc" podUID="27bd44a3-f6bc-462d-92a4-53287bd71b8f" Feb 17 17:59:00 crc kubenswrapper[4680]: E0217 17:59:00.436027 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-bxxqw" podUID="01a45563-cabc-4dbc-80cb-49b274ab96cd" Feb 17 17:59:00 crc kubenswrapper[4680]: E0217 17:59:00.436057 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rw6wk" podUID="eeb4f8c1-4744-4e7f-a8a4-a2deb95f4b45" Feb 17 17:59:00 crc kubenswrapper[4680]: E0217 17:59:00.446576 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rxj42,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-d5n5g_openstack-operators(41d9cacc-1317-479d-99ff-8eb7e6f3d6b6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 17:59:00 crc kubenswrapper[4680]: E0217 17:59:00.447292 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ckql6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-vgwfn_openstack-operators(6f60616b-3009-4961-af69-11a3d3e42f4b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 17:59:00 crc kubenswrapper[4680]: E0217 17:59:00.449886 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7866795846-vgwfn" podUID="6f60616b-3009-4961-af69-11a3d3e42f4b" Feb 17 17:59:00 crc kubenswrapper[4680]: E0217 17:59:00.449948 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-d5n5g" podUID="41d9cacc-1317-479d-99ff-8eb7e6f3d6b6" Feb 17 17:59:00 crc kubenswrapper[4680]: I0217 17:59:00.550650 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-metrics-certs\") pod \"openstack-operator-controller-manager-b96b9dfc9-r9bnq\" (UID: \"1982baea-3b82-4376-ab68-6559095032d5\") " pod="openstack-operators/openstack-operator-controller-manager-b96b9dfc9-r9bnq" Feb 17 17:59:00 crc kubenswrapper[4680]: I0217 17:59:00.550717 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-webhook-certs\") pod \"openstack-operator-controller-manager-b96b9dfc9-r9bnq\" (UID: \"1982baea-3b82-4376-ab68-6559095032d5\") " pod="openstack-operators/openstack-operator-controller-manager-b96b9dfc9-r9bnq" Feb 17 17:59:00 crc kubenswrapper[4680]: E0217 17:59:00.550796 4680 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 17:59:00 crc kubenswrapper[4680]: E0217 17:59:00.550822 4680 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 17:59:00 crc kubenswrapper[4680]: E0217 17:59:00.550881 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-webhook-certs podName:1982baea-3b82-4376-ab68-6559095032d5 nodeName:}" failed. No retries permitted until 2026-02-17 17:59:02.550847114 +0000 UTC m=+872.101745793 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-webhook-certs") pod "openstack-operator-controller-manager-b96b9dfc9-r9bnq" (UID: "1982baea-3b82-4376-ab68-6559095032d5") : secret "webhook-server-cert" not found Feb 17 17:59:00 crc kubenswrapper[4680]: E0217 17:59:00.550899 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-metrics-certs podName:1982baea-3b82-4376-ab68-6559095032d5 nodeName:}" failed. No retries permitted until 2026-02-17 17:59:02.550893235 +0000 UTC m=+872.101791904 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-metrics-certs") pod "openstack-operator-controller-manager-b96b9dfc9-r9bnq" (UID: "1982baea-3b82-4376-ab68-6559095032d5") : secret "metrics-server-cert" not found Feb 17 17:59:01 crc kubenswrapper[4680]: I0217 17:59:01.142670 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-d5n5g" event={"ID":"41d9cacc-1317-479d-99ff-8eb7e6f3d6b6","Type":"ContainerStarted","Data":"e5e5b62a767047ac7c68f6b71b6bb99a84e00e8d40c2661fbf3834864ca8d73c"} Feb 17 17:59:01 crc kubenswrapper[4680]: E0217 17:59:01.151246 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-d5n5g" podUID="41d9cacc-1317-479d-99ff-8eb7e6f3d6b6" Feb 17 17:59:01 crc kubenswrapper[4680]: I0217 17:59:01.169503 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-htc8b" event={"ID":"36e854c8-4742-4138-8766-6614fa005bae","Type":"ContainerStarted","Data":"5b0829489b47ed60175a98dfef277896eb2acc9989652a735e6d38f0a5d865e0"} Feb 17 17:59:01 crc kubenswrapper[4680]: I0217 17:59:01.185000 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rw6wk" event={"ID":"eeb4f8c1-4744-4e7f-a8a4-a2deb95f4b45","Type":"ContainerStarted","Data":"f8826316b9d7a6cc69fc64616ceea52e2a4786a042c95bdd9fcea4d692672260"} Feb 17 17:59:01 crc kubenswrapper[4680]: I0217 17:59:01.205839 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-6628n" event={"ID":"41c69a9d-0d86-4088-9490-262b3eff9441","Type":"ContainerStarted","Data":"79fc8633171ea04b04df8cab661d6d0cbee5c3c8cbe8f3b9a386fe18b401dc84"} Feb 17 17:59:01 crc kubenswrapper[4680]: I0217 17:59:01.258034 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7pstg" event={"ID":"5663f682-3b3a-4061-8d4a-377a103c0bd4","Type":"ContainerStarted","Data":"2cf3a359040d154d59ee9c7c35d6f593761520f1efa703f78526894c28e7e12e"} Feb 17 17:59:01 crc kubenswrapper[4680]: E0217 17:59:01.258061 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" 
pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rw6wk" podUID="eeb4f8c1-4744-4e7f-a8a4-a2deb95f4b45" Feb 17 17:59:01 crc kubenswrapper[4680]: I0217 17:59:01.271092 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-th6kc" event={"ID":"27bd44a3-f6bc-462d-92a4-53287bd71b8f","Type":"ContainerStarted","Data":"db435785f0c9181377e58d204997d8209bccf888f558b7664967facb9850b620"} Feb 17 17:59:01 crc kubenswrapper[4680]: E0217 17:59:01.277422 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-th6kc" podUID="27bd44a3-f6bc-462d-92a4-53287bd71b8f" Feb 17 17:59:01 crc kubenswrapper[4680]: I0217 17:59:01.278155 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-t8p47" event={"ID":"827a72ee-4516-4fe2-866a-a87124fe7439","Type":"ContainerStarted","Data":"0400a4695993d0ad4d28baddda91ef206c31378da67bb5f16b147ef68ea710d3"} Feb 17 17:59:01 crc kubenswrapper[4680]: I0217 17:59:01.286884 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-bxxqw" event={"ID":"01a45563-cabc-4dbc-80cb-49b274ab96cd","Type":"ContainerStarted","Data":"38f0a66c55ec79a31c5625f7eeda2100d80922f866fa89efdde7dc0e1510a5ec"} Feb 17 17:59:01 crc kubenswrapper[4680]: E0217 17:59:01.289012 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-bxxqw" podUID="01a45563-cabc-4dbc-80cb-49b274ab96cd" Feb 17 17:59:01 crc kubenswrapper[4680]: I0217 17:59:01.290790 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xqwqr" event={"ID":"09667f14-d7ef-4502-9880-b0f14e2247b8","Type":"ContainerStarted","Data":"e38e0eb3c8ac2ff862fba7e430caf1041752f234fd3012f3429db355a820dda1"} Feb 17 17:59:01 crc kubenswrapper[4680]: I0217 17:59:01.305688 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6k4kw" event={"ID":"7e8462ff-c9e3-4cff-bd4b-7f94d8acb8b1","Type":"ContainerStarted","Data":"27ed9c73012512bcb0b470275b332e3c4fc800890a271cef3251cd69e54c91b8"} Feb 17 17:59:01 crc kubenswrapper[4680]: I0217 17:59:01.311399 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-vgwfn" event={"ID":"6f60616b-3009-4961-af69-11a3d3e42f4b","Type":"ContainerStarted","Data":"f2fe397623fad9112a4483d6d336e44a171c8f302e34cbf1812823121264f3f4"} Feb 17 17:59:01 crc kubenswrapper[4680]: E0217 17:59:01.313595 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" 
pod="openstack-operators/test-operator-controller-manager-7866795846-vgwfn" podUID="6f60616b-3009-4961-af69-11a3d3e42f4b" Feb 17 17:59:01 crc kubenswrapper[4680]: I0217 17:59:01.674857 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a61ed4b7-335a-429e-8c5a-17b2d322c78e-cert\") pod \"infra-operator-controller-manager-ff5c8777-ft44h\" (UID: \"a61ed4b7-335a-429e-8c5a-17b2d322c78e\") " pod="openstack-operators/infra-operator-controller-manager-ff5c8777-ft44h" Feb 17 17:59:01 crc kubenswrapper[4680]: E0217 17:59:01.675070 4680 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 17:59:01 crc kubenswrapper[4680]: E0217 17:59:01.675118 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a61ed4b7-335a-429e-8c5a-17b2d322c78e-cert podName:a61ed4b7-335a-429e-8c5a-17b2d322c78e nodeName:}" failed. No retries permitted until 2026-02-17 17:59:05.675104791 +0000 UTC m=+875.226003470 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a61ed4b7-335a-429e-8c5a-17b2d322c78e-cert") pod "infra-operator-controller-manager-ff5c8777-ft44h" (UID: "a61ed4b7-335a-429e-8c5a-17b2d322c78e") : secret "infra-operator-webhook-server-cert" not found Feb 17 17:59:02 crc kubenswrapper[4680]: I0217 17:59:02.190689 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a909135-a1d3-4a69-875d-74f19de59cf1-cert\") pod \"openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv\" (UID: \"3a909135-a1d3-4a69-875d-74f19de59cf1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv" Feb 17 17:59:02 crc kubenswrapper[4680]: E0217 17:59:02.190925 4680 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 17:59:02 crc kubenswrapper[4680]: E0217 17:59:02.191004 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a909135-a1d3-4a69-875d-74f19de59cf1-cert podName:3a909135-a1d3-4a69-875d-74f19de59cf1 nodeName:}" failed. No retries permitted until 2026-02-17 17:59:06.190986016 +0000 UTC m=+875.741884695 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3a909135-a1d3-4a69-875d-74f19de59cf1-cert") pod "openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv" (UID: "3a909135-a1d3-4a69-875d-74f19de59cf1") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 17:59:02 crc kubenswrapper[4680]: E0217 17:59:02.328248 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-vgwfn" podUID="6f60616b-3009-4961-af69-11a3d3e42f4b" Feb 17 17:59:02 crc kubenswrapper[4680]: E0217 17:59:02.328260 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-d5n5g" podUID="41d9cacc-1317-479d-99ff-8eb7e6f3d6b6" Feb 17 17:59:02 crc kubenswrapper[4680]: E0217 17:59:02.328419 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rw6wk" podUID="eeb4f8c1-4744-4e7f-a8a4-a2deb95f4b45" Feb 17 17:59:02 crc kubenswrapper[4680]: E0217 17:59:02.328673 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-bxxqw" podUID="01a45563-cabc-4dbc-80cb-49b274ab96cd" Feb 17 17:59:02 crc kubenswrapper[4680]: E0217 17:59:02.331242 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-th6kc" podUID="27bd44a3-f6bc-462d-92a4-53287bd71b8f" Feb 17 17:59:02 crc kubenswrapper[4680]: I0217 17:59:02.638372 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-metrics-certs\") pod \"openstack-operator-controller-manager-b96b9dfc9-r9bnq\" (UID: \"1982baea-3b82-4376-ab68-6559095032d5\") " pod="openstack-operators/openstack-operator-controller-manager-b96b9dfc9-r9bnq" Feb 17 17:59:02 crc kubenswrapper[4680]: I0217 17:59:02.638480 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-webhook-certs\") pod \"openstack-operator-controller-manager-b96b9dfc9-r9bnq\" (UID: \"1982baea-3b82-4376-ab68-6559095032d5\") " 
pod="openstack-operators/openstack-operator-controller-manager-b96b9dfc9-r9bnq" Feb 17 17:59:02 crc kubenswrapper[4680]: E0217 17:59:02.638662 4680 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 17:59:02 crc kubenswrapper[4680]: E0217 17:59:02.638726 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-webhook-certs podName:1982baea-3b82-4376-ab68-6559095032d5 nodeName:}" failed. No retries permitted until 2026-02-17 17:59:06.638704014 +0000 UTC m=+876.189602693 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-webhook-certs") pod "openstack-operator-controller-manager-b96b9dfc9-r9bnq" (UID: "1982baea-3b82-4376-ab68-6559095032d5") : secret "webhook-server-cert" not found Feb 17 17:59:02 crc kubenswrapper[4680]: E0217 17:59:02.640422 4680 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 17:59:02 crc kubenswrapper[4680]: E0217 17:59:02.640596 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-metrics-certs podName:1982baea-3b82-4376-ab68-6559095032d5 nodeName:}" failed. No retries permitted until 2026-02-17 17:59:06.640574532 +0000 UTC m=+876.191473211 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-metrics-certs") pod "openstack-operator-controller-manager-b96b9dfc9-r9bnq" (UID: "1982baea-3b82-4376-ab68-6559095032d5") : secret "metrics-server-cert" not found Feb 17 17:59:03 crc kubenswrapper[4680]: I0217 17:59:03.676973 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zxbtl"] Feb 17 17:59:03 crc kubenswrapper[4680]: I0217 17:59:03.681745 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zxbtl" Feb 17 17:59:03 crc kubenswrapper[4680]: I0217 17:59:03.722456 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zxbtl"] Feb 17 17:59:03 crc kubenswrapper[4680]: I0217 17:59:03.760875 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q58bw\" (UniqueName: \"kubernetes.io/projected/81f62af7-df09-4294-8d1c-d023b33c3384-kube-api-access-q58bw\") pod \"certified-operators-zxbtl\" (UID: \"81f62af7-df09-4294-8d1c-d023b33c3384\") " pod="openshift-marketplace/certified-operators-zxbtl" Feb 17 17:59:03 crc kubenswrapper[4680]: I0217 17:59:03.761055 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81f62af7-df09-4294-8d1c-d023b33c3384-catalog-content\") pod \"certified-operators-zxbtl\" (UID: \"81f62af7-df09-4294-8d1c-d023b33c3384\") " pod="openshift-marketplace/certified-operators-zxbtl" Feb 17 17:59:03 crc kubenswrapper[4680]: I0217 17:59:03.761129 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81f62af7-df09-4294-8d1c-d023b33c3384-utilities\") pod \"certified-operators-zxbtl\" (UID: \"81f62af7-df09-4294-8d1c-d023b33c3384\") " pod="openshift-marketplace/certified-operators-zxbtl" Feb 17 17:59:03 crc kubenswrapper[4680]: I0217 17:59:03.863011 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81f62af7-df09-4294-8d1c-d023b33c3384-catalog-content\") pod \"certified-operators-zxbtl\" (UID: \"81f62af7-df09-4294-8d1c-d023b33c3384\") " pod="openshift-marketplace/certified-operators-zxbtl" Feb 17 17:59:03 crc kubenswrapper[4680]: I0217 17:59:03.863078 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81f62af7-df09-4294-8d1c-d023b33c3384-utilities\") pod \"certified-operators-zxbtl\" (UID: \"81f62af7-df09-4294-8d1c-d023b33c3384\") " pod="openshift-marketplace/certified-operators-zxbtl" Feb 17 17:59:03 crc kubenswrapper[4680]: I0217 17:59:03.863136 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q58bw\" (UniqueName: \"kubernetes.io/projected/81f62af7-df09-4294-8d1c-d023b33c3384-kube-api-access-q58bw\") pod \"certified-operators-zxbtl\" (UID: \"81f62af7-df09-4294-8d1c-d023b33c3384\") " pod="openshift-marketplace/certified-operators-zxbtl" Feb 17 17:59:03 crc kubenswrapper[4680]: I0217 17:59:03.863989 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81f62af7-df09-4294-8d1c-d023b33c3384-utilities\") pod \"certified-operators-zxbtl\" (UID: \"81f62af7-df09-4294-8d1c-d023b33c3384\") " pod="openshift-marketplace/certified-operators-zxbtl" Feb 17 17:59:03 crc kubenswrapper[4680]: I0217 17:59:03.864010 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81f62af7-df09-4294-8d1c-d023b33c3384-catalog-content\") pod \"certified-operators-zxbtl\" (UID: \"81f62af7-df09-4294-8d1c-d023b33c3384\") " pod="openshift-marketplace/certified-operators-zxbtl" Feb 17 17:59:03 crc kubenswrapper[4680]: I0217 17:59:03.889453 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-q58bw\" (UniqueName: \"kubernetes.io/projected/81f62af7-df09-4294-8d1c-d023b33c3384-kube-api-access-q58bw\") pod \"certified-operators-zxbtl\" (UID: \"81f62af7-df09-4294-8d1c-d023b33c3384\") " pod="openshift-marketplace/certified-operators-zxbtl" Feb 17 17:59:04 crc kubenswrapper[4680]: I0217 17:59:04.013196 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zxbtl" Feb 17 17:59:05 crc kubenswrapper[4680]: I0217 17:59:05.588903 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:59:05 crc kubenswrapper[4680]: I0217 17:59:05.590350 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:59:05 crc kubenswrapper[4680]: I0217 17:59:05.696001 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a61ed4b7-335a-429e-8c5a-17b2d322c78e-cert\") pod \"infra-operator-controller-manager-ff5c8777-ft44h\" (UID: \"a61ed4b7-335a-429e-8c5a-17b2d322c78e\") " pod="openstack-operators/infra-operator-controller-manager-ff5c8777-ft44h" Feb 17 17:59:05 crc kubenswrapper[4680]: E0217 17:59:05.696180 4680 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 17:59:05 crc kubenswrapper[4680]: E0217 17:59:05.696242 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a61ed4b7-335a-429e-8c5a-17b2d322c78e-cert podName:a61ed4b7-335a-429e-8c5a-17b2d322c78e nodeName:}" failed. No retries permitted until 2026-02-17 17:59:13.696222427 +0000 UTC m=+883.247121106 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a61ed4b7-335a-429e-8c5a-17b2d322c78e-cert") pod "infra-operator-controller-manager-ff5c8777-ft44h" (UID: "a61ed4b7-335a-429e-8c5a-17b2d322c78e") : secret "infra-operator-webhook-server-cert" not found Feb 17 17:59:06 crc kubenswrapper[4680]: I0217 17:59:06.202151 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a909135-a1d3-4a69-875d-74f19de59cf1-cert\") pod \"openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv\" (UID: \"3a909135-a1d3-4a69-875d-74f19de59cf1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv" Feb 17 17:59:06 crc kubenswrapper[4680]: E0217 17:59:06.202293 4680 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 17:59:06 crc kubenswrapper[4680]: E0217 17:59:06.202377 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a909135-a1d3-4a69-875d-74f19de59cf1-cert podName:3a909135-a1d3-4a69-875d-74f19de59cf1 nodeName:}" failed. 
No retries permitted until 2026-02-17 17:59:14.202359423 +0000 UTC m=+883.753258102 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3a909135-a1d3-4a69-875d-74f19de59cf1-cert") pod "openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv" (UID: "3a909135-a1d3-4a69-875d-74f19de59cf1") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 17:59:06 crc kubenswrapper[4680]: I0217 17:59:06.709226 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-metrics-certs\") pod \"openstack-operator-controller-manager-b96b9dfc9-r9bnq\" (UID: \"1982baea-3b82-4376-ab68-6559095032d5\") " pod="openstack-operators/openstack-operator-controller-manager-b96b9dfc9-r9bnq" Feb 17 17:59:06 crc kubenswrapper[4680]: I0217 17:59:06.709311 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-webhook-certs\") pod \"openstack-operator-controller-manager-b96b9dfc9-r9bnq\" (UID: \"1982baea-3b82-4376-ab68-6559095032d5\") " pod="openstack-operators/openstack-operator-controller-manager-b96b9dfc9-r9bnq" Feb 17 17:59:06 crc kubenswrapper[4680]: E0217 17:59:06.709668 4680 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 17:59:06 crc kubenswrapper[4680]: E0217 17:59:06.709686 4680 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 17:59:06 crc kubenswrapper[4680]: E0217 17:59:06.709734 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-webhook-certs podName:1982baea-3b82-4376-ab68-6559095032d5 nodeName:}" failed. No retries permitted until 2026-02-17 17:59:14.709715308 +0000 UTC m=+884.260613997 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-webhook-certs") pod "openstack-operator-controller-manager-b96b9dfc9-r9bnq" (UID: "1982baea-3b82-4376-ab68-6559095032d5") : secret "webhook-server-cert" not found Feb 17 17:59:06 crc kubenswrapper[4680]: E0217 17:59:06.709772 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-metrics-certs podName:1982baea-3b82-4376-ab68-6559095032d5 nodeName:}" failed. No retries permitted until 2026-02-17 17:59:14.709748389 +0000 UTC m=+884.260647058 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-metrics-certs") pod "openstack-operator-controller-manager-b96b9dfc9-r9bnq" (UID: "1982baea-3b82-4376-ab68-6559095032d5") : secret "metrics-server-cert" not found Feb 17 17:59:08 crc kubenswrapper[4680]: I0217 17:59:08.561305 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b7rbb"] Feb 17 17:59:08 crc kubenswrapper[4680]: I0217 17:59:08.564882 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b7rbb" Feb 17 17:59:08 crc kubenswrapper[4680]: I0217 17:59:08.571810 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b7rbb"] Feb 17 17:59:08 crc kubenswrapper[4680]: I0217 17:59:08.740203 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/917e64a2-e2d5-4871-8774-82861f1ac0fe-catalog-content\") pod \"redhat-operators-b7rbb\" (UID: \"917e64a2-e2d5-4871-8774-82861f1ac0fe\") " pod="openshift-marketplace/redhat-operators-b7rbb" Feb 17 17:59:08 crc kubenswrapper[4680]: I0217 17:59:08.740276 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/917e64a2-e2d5-4871-8774-82861f1ac0fe-utilities\") pod \"redhat-operators-b7rbb\" (UID: \"917e64a2-e2d5-4871-8774-82861f1ac0fe\") " pod="openshift-marketplace/redhat-operators-b7rbb" Feb 17 17:59:08 crc kubenswrapper[4680]: I0217 17:59:08.740379 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg5x7\" (UniqueName: \"kubernetes.io/projected/917e64a2-e2d5-4871-8774-82861f1ac0fe-kube-api-access-zg5x7\") pod \"redhat-operators-b7rbb\" (UID: \"917e64a2-e2d5-4871-8774-82861f1ac0fe\") " pod="openshift-marketplace/redhat-operators-b7rbb" Feb 17 17:59:08 crc kubenswrapper[4680]: I0217 17:59:08.842437 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/917e64a2-e2d5-4871-8774-82861f1ac0fe-catalog-content\") pod \"redhat-operators-b7rbb\" (UID: \"917e64a2-e2d5-4871-8774-82861f1ac0fe\") " pod="openshift-marketplace/redhat-operators-b7rbb" Feb 17 17:59:08 crc kubenswrapper[4680]: I0217 17:59:08.842516 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/917e64a2-e2d5-4871-8774-82861f1ac0fe-utilities\") pod \"redhat-operators-b7rbb\" (UID: \"917e64a2-e2d5-4871-8774-82861f1ac0fe\") " pod="openshift-marketplace/redhat-operators-b7rbb" Feb 17 17:59:08 crc kubenswrapper[4680]: I0217 17:59:08.842625 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg5x7\" (UniqueName: \"kubernetes.io/projected/917e64a2-e2d5-4871-8774-82861f1ac0fe-kube-api-access-zg5x7\") pod \"redhat-operators-b7rbb\" (UID: \"917e64a2-e2d5-4871-8774-82861f1ac0fe\") " pod="openshift-marketplace/redhat-operators-b7rbb" Feb 17 17:59:08 crc kubenswrapper[4680]: I0217 17:59:08.843006 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/917e64a2-e2d5-4871-8774-82861f1ac0fe-catalog-content\") pod \"redhat-operators-b7rbb\" (UID: \"917e64a2-e2d5-4871-8774-82861f1ac0fe\") " pod="openshift-marketplace/redhat-operators-b7rbb" Feb 17 17:59:08 crc kubenswrapper[4680]: I0217 17:59:08.843174 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/917e64a2-e2d5-4871-8774-82861f1ac0fe-utilities\") pod \"redhat-operators-b7rbb\" (UID: \"917e64a2-e2d5-4871-8774-82861f1ac0fe\") " pod="openshift-marketplace/redhat-operators-b7rbb" Feb 17 17:59:08 crc kubenswrapper[4680]: I0217 17:59:08.877431 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zg5x7\" (UniqueName: \"kubernetes.io/projected/917e64a2-e2d5-4871-8774-82861f1ac0fe-kube-api-access-zg5x7\") pod \"redhat-operators-b7rbb\" (UID: \"917e64a2-e2d5-4871-8774-82861f1ac0fe\") " pod="openshift-marketplace/redhat-operators-b7rbb" Feb 17 17:59:08 crc kubenswrapper[4680]: I0217 17:59:08.901906 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b7rbb" Feb 17 17:59:13 crc kubenswrapper[4680]: I0217 17:59:13.705596 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a61ed4b7-335a-429e-8c5a-17b2d322c78e-cert\") pod \"infra-operator-controller-manager-ff5c8777-ft44h\" (UID: \"a61ed4b7-335a-429e-8c5a-17b2d322c78e\") " pod="openstack-operators/infra-operator-controller-manager-ff5c8777-ft44h" Feb 17 17:59:13 crc kubenswrapper[4680]: E0217 17:59:13.705763 4680 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 17:59:13 crc kubenswrapper[4680]: E0217 17:59:13.706220 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a61ed4b7-335a-429e-8c5a-17b2d322c78e-cert podName:a61ed4b7-335a-429e-8c5a-17b2d322c78e nodeName:}" failed. No retries permitted until 2026-02-17 17:59:29.706204039 +0000 UTC m=+899.257102718 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a61ed4b7-335a-429e-8c5a-17b2d322c78e-cert") pod "infra-operator-controller-manager-ff5c8777-ft44h" (UID: "a61ed4b7-335a-429e-8c5a-17b2d322c78e") : secret "infra-operator-webhook-server-cert" not found Feb 17 17:59:13 crc kubenswrapper[4680]: E0217 17:59:13.812590 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" Feb 17 17:59:13 crc kubenswrapper[4680]: E0217 17:59:13.812835 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zxxd7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d8bf5c495-64dzx_openstack-operators(5e432327-20d1-4688-8dda-173b62e27110): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 17:59:13 crc kubenswrapper[4680]: E0217 17:59:13.814117 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-64dzx" podUID="5e432327-20d1-4688-8dda-173b62e27110" Feb 17 17:59:14 crc kubenswrapper[4680]: I0217 17:59:14.214858 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a909135-a1d3-4a69-875d-74f19de59cf1-cert\") pod \"openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv\" (UID: \"3a909135-a1d3-4a69-875d-74f19de59cf1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv" Feb 17 17:59:14 crc kubenswrapper[4680]: I0217 17:59:14.221499 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a909135-a1d3-4a69-875d-74f19de59cf1-cert\") pod \"openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv\" (UID: \"3a909135-a1d3-4a69-875d-74f19de59cf1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv" Feb 17 17:59:14 crc kubenswrapper[4680]: I0217 17:59:14.427263 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv" Feb 17 17:59:14 crc kubenswrapper[4680]: E0217 17:59:14.431805 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-64dzx" podUID="5e432327-20d1-4688-8dda-173b62e27110" Feb 17 17:59:14 crc kubenswrapper[4680]: I0217 17:59:14.720730 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-metrics-certs\") pod \"openstack-operator-controller-manager-b96b9dfc9-r9bnq\" (UID: \"1982baea-3b82-4376-ab68-6559095032d5\") " pod="openstack-operators/openstack-operator-controller-manager-b96b9dfc9-r9bnq" Feb 17 17:59:14 crc kubenswrapper[4680]: I0217 17:59:14.720811 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-webhook-certs\") pod \"openstack-operator-controller-manager-b96b9dfc9-r9bnq\" (UID: \"1982baea-3b82-4376-ab68-6559095032d5\") " pod="openstack-operators/openstack-operator-controller-manager-b96b9dfc9-r9bnq" Feb 17 17:59:14 crc kubenswrapper[4680]: E0217 17:59:14.720975 4680 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 17:59:14 crc kubenswrapper[4680]: E0217 17:59:14.721037 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-webhook-certs podName:1982baea-3b82-4376-ab68-6559095032d5 nodeName:}" failed. No retries permitted until 2026-02-17 17:59:30.721013802 +0000 UTC m=+900.271912501 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-webhook-certs") pod "openstack-operator-controller-manager-b96b9dfc9-r9bnq" (UID: "1982baea-3b82-4376-ab68-6559095032d5") : secret "webhook-server-cert" not found Feb 17 17:59:14 crc kubenswrapper[4680]: I0217 17:59:14.727262 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-metrics-certs\") pod \"openstack-operator-controller-manager-b96b9dfc9-r9bnq\" (UID: \"1982baea-3b82-4376-ab68-6559095032d5\") " pod="openstack-operators/openstack-operator-controller-manager-b96b9dfc9-r9bnq" Feb 17 17:59:15 crc kubenswrapper[4680]: E0217 17:59:15.305401 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" Feb 17 17:59:15 crc kubenswrapper[4680]: E0217 17:59:15.305598 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gjdlh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5b9b8895d5-wpzsn_openstack-operators(85f8c95f-eecb-465c-be1f-c311edcbe597): ErrImagePull: 
rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 17:59:15 crc kubenswrapper[4680]: E0217 17:59:15.307362 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-wpzsn" podUID="85f8c95f-eecb-465c-be1f-c311edcbe597" Feb 17 17:59:15 crc kubenswrapper[4680]: E0217 17:59:15.438533 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-wpzsn" podUID="85f8c95f-eecb-465c-be1f-c311edcbe597" Feb 17 17:59:16 crc kubenswrapper[4680]: E0217 17:59:16.003960 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979" Feb 17 17:59:16 crc kubenswrapper[4680]: E0217 17:59:16.004244 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6mjsp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-5d946d989d-nbwj4_openstack-operators(7030ebc6-bc71-4388-97c3-14403919b886): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 17:59:16 crc kubenswrapper[4680]: E0217 17:59:16.005445 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nbwj4" podUID="7030ebc6-bc71-4388-97c3-14403919b886" Feb 17 17:59:16 crc kubenswrapper[4680]: E0217 17:59:16.452297 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nbwj4" podUID="7030ebc6-bc71-4388-97c3-14403919b886" Feb 17 17:59:17 crc kubenswrapper[4680]: E0217 17:59:17.657089 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" Feb 17 17:59:17 crc kubenswrapper[4680]: E0217 17:59:17.657623 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q285t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-7pstg_openstack-operators(5663f682-3b3a-4061-8d4a-377a103c0bd4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 17:59:17 crc kubenswrapper[4680]: E0217 17:59:17.658885 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7pstg" podUID="5663f682-3b3a-4061-8d4a-377a103c0bd4" Feb 17 17:59:18 crc kubenswrapper[4680]: E0217 17:59:18.490053 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7pstg" podUID="5663f682-3b3a-4061-8d4a-377a103c0bd4" Feb 17 17:59:18 crc kubenswrapper[4680]: E0217 17:59:18.676569 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99" Feb 17 17:59:18 crc kubenswrapper[4680]: E0217 17:59:18.676764 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nfq5b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7f45b4ff68-6628n_openstack-operators(41c69a9d-0d86-4088-9490-262b3eff9441): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 17:59:18 crc kubenswrapper[4680]: E0217 17:59:18.677956 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-6628n" podUID="41c69a9d-0d86-4088-9490-262b3eff9441" Feb 17 17:59:19 crc kubenswrapper[4680]: E0217 17:59:19.496599 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-6628n" podUID="41c69a9d-0d86-4088-9490-262b3eff9441" Feb 17 17:59:23 crc kubenswrapper[4680]: E0217 17:59:23.326912 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" Feb 17 17:59:23 crc kubenswrapper[4680]: E0217 17:59:23.327455 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-46phb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-554564d7fc-fnnbk_openstack-operators(25257448-9b91-4b64-9279-e3e51e752019): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 17:59:23 crc kubenswrapper[4680]: E0217 17:59:23.328643 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fnnbk" podUID="25257448-9b91-4b64-9279-e3e51e752019" Feb 17 17:59:23 crc kubenswrapper[4680]: E0217 17:59:23.522165 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fnnbk" podUID="25257448-9b91-4b64-9279-e3e51e752019" Feb 17 17:59:23 crc kubenswrapper[4680]: E0217 17:59:23.911393 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0" Feb 17 17:59:23 crc kubenswrapper[4680]: E0217 17:59:23.911584 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q9t2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-t8p47_openstack-operators(827a72ee-4516-4fe2-866a-a87124fe7439): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 17:59:23 crc kubenswrapper[4680]: E0217 17:59:23.912926 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-t8p47" podUID="827a72ee-4516-4fe2-866a-a87124fe7439" Feb 17 17:59:24 crc kubenswrapper[4680]: E0217 17:59:24.378625 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 17 17:59:24 crc kubenswrapper[4680]: E0217 17:59:24.379031 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fn2vm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-6k4kw_openstack-operators(7e8462ff-c9e3-4cff-bd4b-7f94d8acb8b1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 17:59:24 crc kubenswrapper[4680]: E0217 17:59:24.380225 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6k4kw" podUID="7e8462ff-c9e3-4cff-bd4b-7f94d8acb8b1" Feb 17 17:59:24 crc kubenswrapper[4680]: E0217 17:59:24.530036 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6k4kw" podUID="7e8462ff-c9e3-4cff-bd4b-7f94d8acb8b1" Feb 17 17:59:24 crc kubenswrapper[4680]: E0217 17:59:24.533276 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-t8p47" podUID="827a72ee-4516-4fe2-866a-a87124fe7439" Feb 17 17:59:25 crc kubenswrapper[4680]: E0217 17:59:25.492007 4680 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 17 17:59:25 crc kubenswrapper[4680]: E0217 17:59:25.492531 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dqw48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-xqwqr_openstack-operators(09667f14-d7ef-4502-9880-b0f14e2247b8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 17:59:25 crc kubenswrapper[4680]: E0217 17:59:25.493639 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xqwqr" podUID="09667f14-d7ef-4502-9880-b0f14e2247b8" Feb 17 17:59:25 crc kubenswrapper[4680]: E0217 17:59:25.536008 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" 
pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xqwqr" podUID="09667f14-d7ef-4502-9880-b0f14e2247b8" Feb 17 17:59:29 crc kubenswrapper[4680]: I0217 17:59:29.807160 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a61ed4b7-335a-429e-8c5a-17b2d322c78e-cert\") pod \"infra-operator-controller-manager-ff5c8777-ft44h\" (UID: \"a61ed4b7-335a-429e-8c5a-17b2d322c78e\") " pod="openstack-operators/infra-operator-controller-manager-ff5c8777-ft44h" Feb 17 17:59:29 crc kubenswrapper[4680]: I0217 17:59:29.816502 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a61ed4b7-335a-429e-8c5a-17b2d322c78e-cert\") pod \"infra-operator-controller-manager-ff5c8777-ft44h\" (UID: \"a61ed4b7-335a-429e-8c5a-17b2d322c78e\") " pod="openstack-operators/infra-operator-controller-manager-ff5c8777-ft44h" Feb 17 17:59:29 crc kubenswrapper[4680]: I0217 17:59:29.950263 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-ff5c8777-ft44h" Feb 17 17:59:30 crc kubenswrapper[4680]: I0217 17:59:30.821177 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-webhook-certs\") pod \"openstack-operator-controller-manager-b96b9dfc9-r9bnq\" (UID: \"1982baea-3b82-4376-ab68-6559095032d5\") " pod="openstack-operators/openstack-operator-controller-manager-b96b9dfc9-r9bnq" Feb 17 17:59:30 crc kubenswrapper[4680]: I0217 17:59:30.836026 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1982baea-3b82-4376-ab68-6559095032d5-webhook-certs\") pod \"openstack-operator-controller-manager-b96b9dfc9-r9bnq\" (UID: \"1982baea-3b82-4376-ab68-6559095032d5\") " pod="openstack-operators/openstack-operator-controller-manager-b96b9dfc9-r9bnq" Feb 17 17:59:30 crc kubenswrapper[4680]: I0217 17:59:30.920565 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-b96b9dfc9-r9bnq" Feb 17 17:59:31 crc kubenswrapper[4680]: E0217 17:59:31.846927 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 17 17:59:31 crc kubenswrapper[4680]: E0217 17:59:31.847423 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7fc4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-jpnmh_openstack-operators(73c67b77-ecaf-4925-aba5-855dc368e729): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 17:59:31 crc kubenswrapper[4680]: E0217 17:59:31.848945 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jpnmh" podUID="73c67b77-ecaf-4925-aba5-855dc368e729" Feb 17 17:59:32 crc kubenswrapper[4680]: I0217 17:59:32.323494 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-b7rbb"] Feb 17 17:59:32 crc kubenswrapper[4680]: I0217 17:59:32.458593 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zxbtl"] Feb 17 17:59:32 crc kubenswrapper[4680]: I0217 17:59:32.495907 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv"] Feb 17 17:59:32 crc kubenswrapper[4680]: W0217 17:59:32.553116 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a909135_a1d3_4a69_875d_74f19de59cf1.slice/crio-8d8f4921767449280edb54efa45d2ae791862607e4f5432bcdd5ca92194e9515 WatchSource:0}: Error finding container 8d8f4921767449280edb54efa45d2ae791862607e4f5432bcdd5ca92194e9515: Status 404 returned error can't find the container with id 8d8f4921767449280edb54efa45d2ae791862607e4f5432bcdd5ca92194e9515 Feb 17 17:59:32 crc kubenswrapper[4680]: I0217 17:59:32.590838 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-n45mv" event={"ID":"44afd1e8-ecdc-4854-bafa-d2bdabe26c5b","Type":"ContainerStarted","Data":"9c56be285b4b09b68499073c6523d59f07777f5faa8e68bae8a444c8ad350516"} Feb 17 17:59:32 crc kubenswrapper[4680]: I0217 17:59:32.590920 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-n45mv" Feb 17 17:59:32 crc kubenswrapper[4680]: I0217 17:59:32.598899 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-zq66h" event={"ID":"3a32bda7-58e5-494b-af42-d255934537ac","Type":"ContainerStarted","Data":"071701df51249d9a2ec5413810b89f5a32f53d79fd69745a8022e3f54f2a5b60"} Feb 17 17:59:32 crc kubenswrapper[4680]: I0217 17:59:32.599390 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-zq66h" Feb 17 17:59:32 crc kubenswrapper[4680]: I0217 17:59:32.613680 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-n45mv" podStartSLOduration=9.321396513 podStartE2EDuration="35.613662328s" podCreationTimestamp="2026-02-17 17:58:57 +0000 UTC" firstStartedPulling="2026-02-17 17:59:00.03181641 +0000 UTC m=+869.582715089" lastFinishedPulling="2026-02-17 17:59:26.324082225 +0000 UTC m=+895.874980904" observedRunningTime="2026-02-17 17:59:32.608911097 +0000 UTC m=+902.159809776" watchObservedRunningTime="2026-02-17 17:59:32.613662328 +0000 UTC m=+902.164561007" Feb 17 17:59:32 crc kubenswrapper[4680]: I0217 17:59:32.641850 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-ff5c8777-ft44h"] Feb 17 17:59:32 crc kubenswrapper[4680]: I0217 17:59:32.645059 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-zq66h" podStartSLOduration=10.145479909 podStartE2EDuration="35.64503749s" podCreationTimestamp="2026-02-17 17:58:57 +0000 UTC" firstStartedPulling="2026-02-17 17:58:59.974541807 +0000 UTC m=+869.525440486" lastFinishedPulling="2026-02-17 17:59:25.474099398 +0000 UTC m=+895.024998067" observedRunningTime="2026-02-17 17:59:32.628536948 +0000 UTC m=+902.179435627" watchObservedRunningTime="2026-02-17 
17:59:32.64503749 +0000 UTC m=+902.195936169" Feb 17 17:59:32 crc kubenswrapper[4680]: I0217 17:59:32.645944 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7rbb" event={"ID":"917e64a2-e2d5-4871-8774-82861f1ac0fe","Type":"ContainerStarted","Data":"1fff211c91ca6cce81a51654639b78fd6143b6c8b02553388244d4bf4dd6fccf"} Feb 17 17:59:32 crc kubenswrapper[4680]: I0217 17:59:32.668281 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv" event={"ID":"3a909135-a1d3-4a69-875d-74f19de59cf1","Type":"ContainerStarted","Data":"8d8f4921767449280edb54efa45d2ae791862607e4f5432bcdd5ca92194e9515"} Feb 17 17:59:32 crc kubenswrapper[4680]: W0217 17:59:32.678972 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda61ed4b7_335a_429e_8c5a_17b2d322c78e.slice/crio-13a975654f39cbd70eda79ac3bee364c61a7b83f197d1c8b37cc94d3af1f04a1 WatchSource:0}: Error finding container 13a975654f39cbd70eda79ac3bee364c61a7b83f197d1c8b37cc94d3af1f04a1: Status 404 returned error can't find the container with id 13a975654f39cbd70eda79ac3bee364c61a7b83f197d1c8b37cc94d3af1f04a1 Feb 17 17:59:32 crc kubenswrapper[4680]: I0217 17:59:32.681915 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-htc8b" event={"ID":"36e854c8-4742-4138-8766-6614fa005bae","Type":"ContainerStarted","Data":"ba3497b555ac5416d759499da7d632903d0316304f1a9927b58607c57e0ca97a"} Feb 17 17:59:32 crc kubenswrapper[4680]: I0217 17:59:32.683074 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-htc8b" Feb 17 17:59:32 crc kubenswrapper[4680]: I0217 17:59:32.692735 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zxbtl" event={"ID":"81f62af7-df09-4294-8d1c-d023b33c3384","Type":"ContainerStarted","Data":"f9494ca4a0511a1ccfb091deaa71f88bbcf275b6e8a25b9fba5487d27ad4fd55"} Feb 17 17:59:32 crc kubenswrapper[4680]: I0217 17:59:32.724537 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-htc8b" podStartSLOduration=8.820203545 podStartE2EDuration="34.724514535s" podCreationTimestamp="2026-02-17 17:58:58 +0000 UTC" firstStartedPulling="2026-02-17 17:59:00.419195727 +0000 UTC m=+869.970094406" lastFinishedPulling="2026-02-17 17:59:26.323506717 +0000 UTC m=+895.874405396" observedRunningTime="2026-02-17 17:59:32.710716849 +0000 UTC m=+902.261615528" watchObservedRunningTime="2026-02-17 17:59:32.724514535 +0000 UTC m=+902.275413214" Feb 17 17:59:32 crc kubenswrapper[4680]: I0217 17:59:32.725749 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mqdqj" event={"ID":"dd0b1acf-7e52-47f1-9fa3-c82be1b7ce8f","Type":"ContainerStarted","Data":"8a4345cd49fac954833b623a1a0491f3a1354a4200e8af52a105257413873dec"} Feb 17 17:59:32 crc kubenswrapper[4680]: I0217 17:59:32.725911 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mqdqj" Feb 17 17:59:32 crc kubenswrapper[4680]: E0217 17:59:32.730238 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jpnmh" podUID="73c67b77-ecaf-4925-aba5-855dc368e729" Feb 17 17:59:32 crc kubenswrapper[4680]: I0217 17:59:32.758829 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-b96b9dfc9-r9bnq"] Feb 17 17:59:32 crc kubenswrapper[4680]: I0217 17:59:32.796886 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mqdqj" podStartSLOduration=10.345529839 podStartE2EDuration="35.796868025s" podCreationTimestamp="2026-02-17 17:58:57 +0000 UTC" firstStartedPulling="2026-02-17 17:59:00.022735412 +0000 UTC m=+869.573634091" lastFinishedPulling="2026-02-17 17:59:25.474073598 +0000 UTC m=+895.024972277" observedRunningTime="2026-02-17 17:59:32.787504119 +0000 UTC m=+902.338402798" watchObservedRunningTime="2026-02-17 17:59:32.796868025 +0000 UTC m=+902.347766694" Feb 17 17:59:33 crc kubenswrapper[4680]: I0217 17:59:33.750016 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-wpzsn" event={"ID":"85f8c95f-eecb-465c-be1f-c311edcbe597","Type":"ContainerStarted","Data":"7387f1e7bbf206f0cd65d43572ce382242b8de07ff7462e404af73d1dab2dd1f"} Feb 17 17:59:33 crc kubenswrapper[4680]: I0217 17:59:33.750902 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-wpzsn" Feb 17 17:59:33 crc kubenswrapper[4680]: I0217 17:59:33.756514 4680 generic.go:334] "Generic (PLEG): container finished" podID="81f62af7-df09-4294-8d1c-d023b33c3384" containerID="a1e37836ef0c4d4ef808e744babaf14ae2ca8dba3983ca9bfc5062468cd72bda" exitCode=0 Feb 17 17:59:33 crc kubenswrapper[4680]: I0217 17:59:33.756625 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zxbtl" event={"ID":"81f62af7-df09-4294-8d1c-d023b33c3384","Type":"ContainerDied","Data":"a1e37836ef0c4d4ef808e744babaf14ae2ca8dba3983ca9bfc5062468cd72bda"} Feb 17 17:59:33 crc kubenswrapper[4680]: I0217 17:59:33.763790 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-vgwfn" event={"ID":"6f60616b-3009-4961-af69-11a3d3e42f4b","Type":"ContainerStarted","Data":"d1733e16a645dfed927d8a807be1cd9a47baacb4f24fdcbac4624e0d724b1135"} Feb 17 17:59:33 crc kubenswrapper[4680]: I0217 17:59:33.764521 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-vgwfn" Feb 17 17:59:33 crc kubenswrapper[4680]: I0217 17:59:33.769475 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-b96b9dfc9-r9bnq" event={"ID":"1982baea-3b82-4376-ab68-6559095032d5","Type":"ContainerStarted","Data":"8de3e1bea4aab660d2e8328dec66f1bc185e5af0412635ae82a79692ccb06d5c"} Feb 17 17:59:33 crc kubenswrapper[4680]: I0217 17:59:33.769548 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-b96b9dfc9-r9bnq" event={"ID":"1982baea-3b82-4376-ab68-6559095032d5","Type":"ContainerStarted","Data":"d6b33296336f78dbea84bc64fcfe878b5844f45dd5dec6fe819e20af22de8d7e"} Feb 17 17:59:33 crc kubenswrapper[4680]: 
I0217 17:59:33.769676 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-b96b9dfc9-r9bnq" Feb 17 17:59:33 crc kubenswrapper[4680]: I0217 17:59:33.778423 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-ff5c8777-ft44h" event={"ID":"a61ed4b7-335a-429e-8c5a-17b2d322c78e","Type":"ContainerStarted","Data":"13a975654f39cbd70eda79ac3bee364c61a7b83f197d1c8b37cc94d3af1f04a1"} Feb 17 17:59:33 crc kubenswrapper[4680]: I0217 17:59:33.782650 4680 generic.go:334] "Generic (PLEG): container finished" podID="917e64a2-e2d5-4871-8774-82861f1ac0fe" containerID="e16697aab6fa7c45ab2120b086b5f6a26cd66b517c3bb50a4446e30dad6d8131" exitCode=0 Feb 17 17:59:33 crc kubenswrapper[4680]: I0217 17:59:33.782705 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7rbb" event={"ID":"917e64a2-e2d5-4871-8774-82861f1ac0fe","Type":"ContainerDied","Data":"e16697aab6fa7c45ab2120b086b5f6a26cd66b517c3bb50a4446e30dad6d8131"} Feb 17 17:59:33 crc kubenswrapper[4680]: I0217 17:59:33.798791 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-9qzbv" event={"ID":"c4d30f2e-a54a-43ef-923f-103f4e0c54d5","Type":"ContainerStarted","Data":"0696595767bbae311c0f4d73360115b2d1ee00b301a99a0b597292f823b1477d"} Feb 17 17:59:33 crc kubenswrapper[4680]: I0217 17:59:33.799253 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-9qzbv" Feb 17 17:59:33 crc kubenswrapper[4680]: I0217 17:59:33.809412 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nbwj4" event={"ID":"7030ebc6-bc71-4388-97c3-14403919b886","Type":"ContainerStarted","Data":"7ce447457f30b518cc7282864d4c31c01ab43c19a6320d733c249f8bb0851313"} Feb 17 17:59:33 crc kubenswrapper[4680]: I0217 17:59:33.809623 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nbwj4" Feb 17 17:59:33 crc kubenswrapper[4680]: I0217 17:59:33.812268 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7pstg" event={"ID":"5663f682-3b3a-4061-8d4a-377a103c0bd4","Type":"ContainerStarted","Data":"1a2f46a4988cf8dc46f3d2a64ec4d59031874d7108a01e3c8a072f60d73d4489"} Feb 17 17:59:33 crc kubenswrapper[4680]: I0217 17:59:33.812858 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7pstg" Feb 17 17:59:33 crc kubenswrapper[4680]: I0217 17:59:33.825380 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-6628n" event={"ID":"41c69a9d-0d86-4088-9490-262b3eff9441","Type":"ContainerStarted","Data":"aee31929dd28d13f8220632126ed3aa73e28f389c2b21ca1eb5a0d361c825c9d"} Feb 17 17:59:33 crc kubenswrapper[4680]: I0217 17:59:33.825976 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-6628n" Feb 17 17:59:33 crc kubenswrapper[4680]: I0217 17:59:33.841933 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-wpzsn" 
podStartSLOduration=4.21969262 podStartE2EDuration="36.841912605s" podCreationTimestamp="2026-02-17 17:58:57 +0000 UTC" firstStartedPulling="2026-02-17 17:58:59.510916286 +0000 UTC m=+869.061814965" lastFinishedPulling="2026-02-17 17:59:32.133136271 +0000 UTC m=+901.684034950" observedRunningTime="2026-02-17 17:59:33.83574876 +0000 UTC m=+903.386647449" watchObservedRunningTime="2026-02-17 17:59:33.841912605 +0000 UTC m=+903.392811284" Feb 17 17:59:33 crc kubenswrapper[4680]: I0217 17:59:33.854093 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-th6kc" event={"ID":"27bd44a3-f6bc-462d-92a4-53287bd71b8f","Type":"ContainerStarted","Data":"78e301e854614cde2e4c07a96cdbee760b6ab21fa000cffc95b2e3b88d232188"} Feb 17 17:59:33 crc kubenswrapper[4680]: I0217 17:59:33.854841 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-th6kc" Feb 17 17:59:33 crc kubenswrapper[4680]: I0217 17:59:33.867033 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-d5n5g" event={"ID":"41d9cacc-1317-479d-99ff-8eb7e6f3d6b6","Type":"ContainerStarted","Data":"0ae477fa317dc245cf48a95cfa68c07cfc4738912ef71d5a68e9f661c0f27236"} Feb 17 17:59:33 crc kubenswrapper[4680]: I0217 17:59:33.867687 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-d5n5g" Feb 17 17:59:33 crc kubenswrapper[4680]: I0217 17:59:33.874823 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-bxxqw" event={"ID":"01a45563-cabc-4dbc-80cb-49b274ab96cd","Type":"ContainerStarted","Data":"062fd07eb0c06f8ad710b1701ffe35f1a7bfaf3b3e8690474ca784ce99a7953e"} Feb 17 17:59:33 crc kubenswrapper[4680]: I0217 17:59:33.875417 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-bxxqw" Feb 17 17:59:33 crc kubenswrapper[4680]: I0217 17:59:33.876777 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rw6wk" event={"ID":"eeb4f8c1-4744-4e7f-a8a4-a2deb95f4b45","Type":"ContainerStarted","Data":"f2b1a2bb71020d1f75f34bdf09b62516aee4be4a13fea1b543b1594e2e57e5f9"} Feb 17 17:59:33 crc kubenswrapper[4680]: I0217 17:59:33.877143 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rw6wk" Feb 17 17:59:33 crc kubenswrapper[4680]: I0217 17:59:33.885355 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-64dzx" event={"ID":"5e432327-20d1-4688-8dda-173b62e27110","Type":"ContainerStarted","Data":"667fc63659f63e7d6aa512f934b51cd86bb7b52f9a85021d8d8e145e2ccc7c6a"} Feb 17 17:59:33 crc kubenswrapper[4680]: I0217 17:59:33.885665 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-64dzx" Feb 17 17:59:33 crc kubenswrapper[4680]: I0217 17:59:33.912835 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7pstg" podStartSLOduration=5.247172823 podStartE2EDuration="36.912818979s" 
podCreationTimestamp="2026-02-17 17:58:57 +0000 UTC" firstStartedPulling="2026-02-17 17:59:00.410435921 +0000 UTC m=+869.961334590" lastFinishedPulling="2026-02-17 17:59:32.076082067 +0000 UTC m=+901.626980746" observedRunningTime="2026-02-17 17:59:33.909621828 +0000 UTC m=+903.460520507" watchObservedRunningTime="2026-02-17 17:59:33.912818979 +0000 UTC m=+903.463717658" Feb 17 17:59:33 crc kubenswrapper[4680]: I0217 17:59:33.974661 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-b96b9dfc9-r9bnq" podStartSLOduration=35.974631335 podStartE2EDuration="35.974631335s" podCreationTimestamp="2026-02-17 17:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:59:33.964252396 +0000 UTC m=+903.515151075" watchObservedRunningTime="2026-02-17 17:59:33.974631335 +0000 UTC m=+903.525530014" Feb 17 17:59:34 crc kubenswrapper[4680]: I0217 17:59:34.097252 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-9qzbv" podStartSLOduration=10.655929983 podStartE2EDuration="37.097234405s" podCreationTimestamp="2026-02-17 17:58:57 +0000 UTC" firstStartedPulling="2026-02-17 17:58:59.032823137 +0000 UTC m=+868.583721816" lastFinishedPulling="2026-02-17 17:59:25.474127559 +0000 UTC m=+895.025026238" observedRunningTime="2026-02-17 17:59:34.092002639 +0000 UTC m=+903.642901318" watchObservedRunningTime="2026-02-17 17:59:34.097234405 +0000 UTC m=+903.648133084" Feb 17 17:59:34 crc kubenswrapper[4680]: I0217 17:59:34.097838 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-6628n" podStartSLOduration=3.785085421 podStartE2EDuration="36.097832744s" podCreationTimestamp="2026-02-17 17:58:58 +0000 UTC" firstStartedPulling="2026-02-17 17:59:00.432239031 +0000 UTC m=+869.983137710" lastFinishedPulling="2026-02-17 17:59:32.744986354 +0000 UTC m=+902.295885033" observedRunningTime="2026-02-17 17:59:34.017655666 +0000 UTC m=+903.568554345" watchObservedRunningTime="2026-02-17 17:59:34.097832744 +0000 UTC m=+903.648731413" Feb 17 17:59:34 crc kubenswrapper[4680]: I0217 17:59:34.172198 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nbwj4" podStartSLOduration=4.539996386 podStartE2EDuration="37.172178466s" podCreationTimestamp="2026-02-17 17:58:57 +0000 UTC" firstStartedPulling="2026-02-17 17:58:59.503406039 +0000 UTC m=+869.054304718" lastFinishedPulling="2026-02-17 17:59:32.135588119 +0000 UTC m=+901.686486798" observedRunningTime="2026-02-17 17:59:34.16944752 +0000 UTC m=+903.720346209" watchObservedRunningTime="2026-02-17 17:59:34.172178466 +0000 UTC m=+903.723077135" Feb 17 17:59:34 crc kubenswrapper[4680]: I0217 17:59:34.193162 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-vgwfn" podStartSLOduration=4.617221902 podStartE2EDuration="36.193142399s" podCreationTimestamp="2026-02-17 17:58:58 +0000 UTC" firstStartedPulling="2026-02-17 17:59:00.447065709 +0000 UTC m=+869.997964388" lastFinishedPulling="2026-02-17 17:59:32.022986206 +0000 UTC m=+901.573884885" observedRunningTime="2026-02-17 17:59:34.192545561 +0000 UTC m=+903.743444240" 
watchObservedRunningTime="2026-02-17 17:59:34.193142399 +0000 UTC m=+903.744041078" Feb 17 17:59:34 crc kubenswrapper[4680]: I0217 17:59:34.250651 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rw6wk" podStartSLOduration=4.660302367 podStartE2EDuration="36.250630029s" podCreationTimestamp="2026-02-17 17:58:58 +0000 UTC" firstStartedPulling="2026-02-17 17:59:00.434201773 +0000 UTC m=+869.985100452" lastFinishedPulling="2026-02-17 17:59:32.024529435 +0000 UTC m=+901.575428114" observedRunningTime="2026-02-17 17:59:34.24559896 +0000 UTC m=+903.796497639" watchObservedRunningTime="2026-02-17 17:59:34.250630029 +0000 UTC m=+903.801528708" Feb 17 17:59:34 crc kubenswrapper[4680]: I0217 17:59:34.278434 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-d5n5g" podStartSLOduration=4.702978647 podStartE2EDuration="36.278418748s" podCreationTimestamp="2026-02-17 17:58:58 +0000 UTC" firstStartedPulling="2026-02-17 17:59:00.44643522 +0000 UTC m=+869.997333899" lastFinishedPulling="2026-02-17 17:59:32.021875321 +0000 UTC m=+901.572774000" observedRunningTime="2026-02-17 17:59:34.275809346 +0000 UTC m=+903.826708055" watchObservedRunningTime="2026-02-17 17:59:34.278418748 +0000 UTC m=+903.829317427" Feb 17 17:59:34 crc kubenswrapper[4680]: I0217 17:59:34.306704 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-th6kc" podStartSLOduration=5.726376497 podStartE2EDuration="37.306685373s" podCreationTimestamp="2026-02-17 17:58:57 +0000 UTC" firstStartedPulling="2026-02-17 17:59:00.434060828 +0000 UTC m=+869.984959507" lastFinishedPulling="2026-02-17 17:59:32.014369714 +0000 UTC m=+901.565268383" observedRunningTime="2026-02-17 17:59:34.303509802 +0000 UTC m=+903.854408481" watchObservedRunningTime="2026-02-17 17:59:34.306685373 +0000 UTC m=+903.857584052" Feb 17 17:59:34 crc kubenswrapper[4680]: I0217 17:59:34.371678 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-bxxqw" podStartSLOduration=5.789621799 podStartE2EDuration="37.371660779s" podCreationTimestamp="2026-02-17 17:58:57 +0000 UTC" firstStartedPulling="2026-02-17 17:59:00.433868902 +0000 UTC m=+869.984767581" lastFinishedPulling="2026-02-17 17:59:32.015907882 +0000 UTC m=+901.566806561" observedRunningTime="2026-02-17 17:59:34.333524712 +0000 UTC m=+903.884423391" watchObservedRunningTime="2026-02-17 17:59:34.371660779 +0000 UTC m=+903.922559458" Feb 17 17:59:35 crc kubenswrapper[4680]: I0217 17:59:35.588486 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:59:35 crc kubenswrapper[4680]: I0217 17:59:35.588719 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:59:35 crc kubenswrapper[4680]: I0217 17:59:35.919600 4680 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7rbb" event={"ID":"917e64a2-e2d5-4871-8774-82861f1ac0fe","Type":"ContainerStarted","Data":"7f0cc1259695a805b6740fa99e34e7d373d9c970900772873b781c53e1ca18b0"} Feb 17 17:59:35 crc kubenswrapper[4680]: I0217 17:59:35.944303 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zxbtl" event={"ID":"81f62af7-df09-4294-8d1c-d023b33c3384","Type":"ContainerStarted","Data":"c8570265cb3edd90eea4a4d3becd786ce2ab4b86ee074ae3c077902d7c4ce242"} Feb 17 17:59:35 crc kubenswrapper[4680]: I0217 17:59:35.951958 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-64dzx" podStartSLOduration=6.379528326 podStartE2EDuration="38.951940245s" podCreationTimestamp="2026-02-17 17:58:57 +0000 UTC" firstStartedPulling="2026-02-17 17:58:59.503185662 +0000 UTC m=+869.054084341" lastFinishedPulling="2026-02-17 17:59:32.075597581 +0000 UTC m=+901.626496260" observedRunningTime="2026-02-17 17:59:34.379555739 +0000 UTC m=+903.930454418" watchObservedRunningTime="2026-02-17 17:59:35.951940245 +0000 UTC m=+905.502838944" Feb 17 17:59:36 crc kubenswrapper[4680]: I0217 17:59:36.961444 4680 generic.go:334] "Generic (PLEG): container finished" podID="81f62af7-df09-4294-8d1c-d023b33c3384" containerID="c8570265cb3edd90eea4a4d3becd786ce2ab4b86ee074ae3c077902d7c4ce242" exitCode=0 Feb 17 17:59:36 crc kubenswrapper[4680]: I0217 17:59:36.961483 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zxbtl" event={"ID":"81f62af7-df09-4294-8d1c-d023b33c3384","Type":"ContainerDied","Data":"c8570265cb3edd90eea4a4d3becd786ce2ab4b86ee074ae3c077902d7c4ce242"} Feb 17 17:59:37 crc kubenswrapper[4680]: I0217 17:59:37.970651 4680 generic.go:334] "Generic (PLEG): container finished" podID="917e64a2-e2d5-4871-8774-82861f1ac0fe" containerID="7f0cc1259695a805b6740fa99e34e7d373d9c970900772873b781c53e1ca18b0" exitCode=0 Feb 17 17:59:37 crc kubenswrapper[4680]: I0217 17:59:37.970693 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7rbb" event={"ID":"917e64a2-e2d5-4871-8774-82861f1ac0fe","Type":"ContainerDied","Data":"7f0cc1259695a805b6740fa99e34e7d373d9c970900772873b781c53e1ca18b0"} Feb 17 17:59:38 crc kubenswrapper[4680]: I0217 17:59:38.019229 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-zq66h" Feb 17 17:59:38 crc kubenswrapper[4680]: I0217 17:59:38.070583 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-9qzbv" Feb 17 17:59:38 crc kubenswrapper[4680]: I0217 17:59:38.129773 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-wpzsn" Feb 17 17:59:38 crc kubenswrapper[4680]: I0217 17:59:38.344271 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-th6kc" Feb 17 17:59:38 crc kubenswrapper[4680]: I0217 17:59:38.393758 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mqdqj" Feb 17 17:59:38 crc kubenswrapper[4680]: I0217 17:59:38.403884 4680 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-bxxqw" Feb 17 17:59:38 crc kubenswrapper[4680]: I0217 17:59:38.488701 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-n45mv" Feb 17 17:59:38 crc kubenswrapper[4680]: I0217 17:59:38.612779 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7pstg" Feb 17 17:59:38 crc kubenswrapper[4680]: I0217 17:59:38.901525 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-d5n5g" Feb 17 17:59:38 crc kubenswrapper[4680]: I0217 17:59:38.912689 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rw6wk" Feb 17 17:59:39 crc kubenswrapper[4680]: I0217 17:59:39.054760 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-vgwfn" Feb 17 17:59:39 crc kubenswrapper[4680]: I0217 17:59:39.225283 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-6628n" Feb 17 17:59:39 crc kubenswrapper[4680]: I0217 17:59:39.239280 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-htc8b" Feb 17 17:59:40 crc kubenswrapper[4680]: I0217 17:59:40.927026 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-b96b9dfc9-r9bnq" Feb 17 17:59:42 crc kubenswrapper[4680]: I0217 17:59:42.006987 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xqwqr" event={"ID":"09667f14-d7ef-4502-9880-b0f14e2247b8","Type":"ContainerStarted","Data":"ddd89881877c78d65499625bffe4c587b29e2fe5d5bc5d3f142eb62aaecc3db5"} Feb 17 17:59:42 crc kubenswrapper[4680]: I0217 17:59:42.007774 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xqwqr" Feb 17 17:59:42 crc kubenswrapper[4680]: I0217 17:59:42.010279 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zxbtl" event={"ID":"81f62af7-df09-4294-8d1c-d023b33c3384","Type":"ContainerStarted","Data":"c7bdfc0c4ab30292d1adc81a4280b8c4aec2876b752f084dd59ebd6589d7e43b"} Feb 17 17:59:42 crc kubenswrapper[4680]: I0217 17:59:42.011835 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fnnbk" event={"ID":"25257448-9b91-4b64-9279-e3e51e752019","Type":"ContainerStarted","Data":"0cda08c77bb7eb282b0c6766a78e3004e69fed7d8679b1e1573d3e4a1fb3ee07"} Feb 17 17:59:42 crc kubenswrapper[4680]: I0217 17:59:42.012038 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fnnbk" Feb 17 17:59:42 crc kubenswrapper[4680]: I0217 17:59:42.042688 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-t8p47" 
event={"ID":"827a72ee-4516-4fe2-866a-a87124fe7439","Type":"ContainerStarted","Data":"b59ef4ce66f4eebd69156fe1afe6a7541d61a5a195436f3466069035a85099a7"} Feb 17 17:59:42 crc kubenswrapper[4680]: I0217 17:59:42.043063 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-t8p47" Feb 17 17:59:42 crc kubenswrapper[4680]: I0217 17:59:42.049183 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv" event={"ID":"3a909135-a1d3-4a69-875d-74f19de59cf1","Type":"ContainerStarted","Data":"ec4cf1bef157728b8ff92849fd9bdff752cde907da06259fabc46292db424395"} Feb 17 17:59:42 crc kubenswrapper[4680]: I0217 17:59:42.050108 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv" Feb 17 17:59:42 crc kubenswrapper[4680]: I0217 17:59:42.055988 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fnnbk" podStartSLOduration=3.378784739 podStartE2EDuration="45.055970106s" podCreationTimestamp="2026-02-17 17:58:57 +0000 UTC" firstStartedPulling="2026-02-17 17:59:00.000793388 +0000 UTC m=+869.551692077" lastFinishedPulling="2026-02-17 17:59:41.677978775 +0000 UTC m=+911.228877444" observedRunningTime="2026-02-17 17:59:42.055086518 +0000 UTC m=+911.605985197" watchObservedRunningTime="2026-02-17 17:59:42.055970106 +0000 UTC m=+911.606868785" Feb 17 17:59:42 crc kubenswrapper[4680]: I0217 17:59:42.095390 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xqwqr" podStartSLOduration=3.408315739 podStartE2EDuration="45.095360522s" podCreationTimestamp="2026-02-17 17:58:57 +0000 UTC" firstStartedPulling="2026-02-17 17:59:00.092540771 +0000 UTC m=+869.643439450" lastFinishedPulling="2026-02-17 17:59:41.779585554 +0000 UTC m=+911.330484233" observedRunningTime="2026-02-17 17:59:42.032784408 +0000 UTC m=+911.583683097" watchObservedRunningTime="2026-02-17 17:59:42.095360522 +0000 UTC m=+911.646259201" Feb 17 17:59:42 crc kubenswrapper[4680]: I0217 17:59:42.105679 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zxbtl" podStartSLOduration=31.298112078 podStartE2EDuration="39.105660525s" podCreationTimestamp="2026-02-17 17:59:03 +0000 UTC" firstStartedPulling="2026-02-17 17:59:33.758214497 +0000 UTC m=+903.309113176" lastFinishedPulling="2026-02-17 17:59:41.565762954 +0000 UTC m=+911.116661623" observedRunningTime="2026-02-17 17:59:42.099795691 +0000 UTC m=+911.650694390" watchObservedRunningTime="2026-02-17 17:59:42.105660525 +0000 UTC m=+911.656559204" Feb 17 17:59:42 crc kubenswrapper[4680]: I0217 17:59:42.150232 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv" podStartSLOduration=35.191578556 podStartE2EDuration="44.150215523s" podCreationTimestamp="2026-02-17 17:58:58 +0000 UTC" firstStartedPulling="2026-02-17 17:59:32.555389053 +0000 UTC m=+902.106287722" lastFinishedPulling="2026-02-17 17:59:41.51402601 +0000 UTC m=+911.064924689" observedRunningTime="2026-02-17 17:59:42.139645462 +0000 UTC m=+911.690544141" watchObservedRunningTime="2026-02-17 17:59:42.150215523 +0000 UTC 
m=+911.701114202" Feb 17 17:59:43 crc kubenswrapper[4680]: I0217 17:59:43.057186 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6k4kw" event={"ID":"7e8462ff-c9e3-4cff-bd4b-7f94d8acb8b1","Type":"ContainerStarted","Data":"fe15fe69c206d761a6f97745d3a2dd151e67875667021b607b671a6c7a174a05"} Feb 17 17:59:43 crc kubenswrapper[4680]: I0217 17:59:43.058441 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-ff5c8777-ft44h" event={"ID":"a61ed4b7-335a-429e-8c5a-17b2d322c78e","Type":"ContainerStarted","Data":"2229e4773ae628ea8941e99fc6a151db28878c986efc17bea64e981b21de3bc7"} Feb 17 17:59:43 crc kubenswrapper[4680]: I0217 17:59:43.058559 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-ff5c8777-ft44h" Feb 17 17:59:43 crc kubenswrapper[4680]: I0217 17:59:43.060218 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7rbb" event={"ID":"917e64a2-e2d5-4871-8774-82861f1ac0fe","Type":"ContainerStarted","Data":"687100b36d4d4b3c7f5a261614f74aca52e3b105c31afc3a510a99923e99c7a2"} Feb 17 17:59:43 crc kubenswrapper[4680]: I0217 17:59:43.077517 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-t8p47" podStartSLOduration=3.839845805 podStartE2EDuration="45.077495811s" podCreationTimestamp="2026-02-17 17:58:58 +0000 UTC" firstStartedPulling="2026-02-17 17:59:00.373614626 +0000 UTC m=+869.924513305" lastFinishedPulling="2026-02-17 17:59:41.611264632 +0000 UTC m=+911.162163311" observedRunningTime="2026-02-17 17:59:42.176080575 +0000 UTC m=+911.726979254" watchObservedRunningTime="2026-02-17 17:59:43.077495811 +0000 UTC m=+912.628394490" Feb 17 17:59:43 crc kubenswrapper[4680]: I0217 17:59:43.079547 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6k4kw" podStartSLOduration=3.5454837340000003 podStartE2EDuration="45.079535084s" podCreationTimestamp="2026-02-17 17:58:58 +0000 UTC" firstStartedPulling="2026-02-17 17:59:00.405406741 +0000 UTC m=+869.956305420" lastFinishedPulling="2026-02-17 17:59:41.939458091 +0000 UTC m=+911.490356770" observedRunningTime="2026-02-17 17:59:43.073736742 +0000 UTC m=+912.624635431" watchObservedRunningTime="2026-02-17 17:59:43.079535084 +0000 UTC m=+912.630433763" Feb 17 17:59:43 crc kubenswrapper[4680]: I0217 17:59:43.099163 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b7rbb" podStartSLOduration=26.781826324 podStartE2EDuration="35.09914308s" podCreationTimestamp="2026-02-17 17:59:08 +0000 UTC" firstStartedPulling="2026-02-17 17:59:33.787110031 +0000 UTC m=+903.338008710" lastFinishedPulling="2026-02-17 17:59:42.104426787 +0000 UTC m=+911.655325466" observedRunningTime="2026-02-17 17:59:43.096645591 +0000 UTC m=+912.647544280" watchObservedRunningTime="2026-02-17 17:59:43.09914308 +0000 UTC m=+912.650041759" Feb 17 17:59:43 crc kubenswrapper[4680]: I0217 17:59:43.118065 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-ff5c8777-ft44h" podStartSLOduration=36.903871052 podStartE2EDuration="46.118042313s" podCreationTimestamp="2026-02-17 17:58:57 +0000 UTC" firstStartedPulling="2026-02-17 
17:59:32.68324676 +0000 UTC m=+902.234145439" lastFinishedPulling="2026-02-17 17:59:41.897418031 +0000 UTC m=+911.448316700" observedRunningTime="2026-02-17 17:59:43.112568221 +0000 UTC m=+912.663466900" watchObservedRunningTime="2026-02-17 17:59:43.118042313 +0000 UTC m=+912.668940992" Feb 17 17:59:44 crc kubenswrapper[4680]: I0217 17:59:44.014465 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zxbtl" Feb 17 17:59:44 crc kubenswrapper[4680]: I0217 17:59:44.014718 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zxbtl" Feb 17 17:59:45 crc kubenswrapper[4680]: I0217 17:59:45.052455 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-zxbtl" podUID="81f62af7-df09-4294-8d1c-d023b33c3384" containerName="registry-server" probeResult="failure" output=< Feb 17 17:59:45 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Feb 17 17:59:45 crc kubenswrapper[4680]: > Feb 17 17:59:47 crc kubenswrapper[4680]: I0217 17:59:47.967628 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-64dzx" Feb 17 17:59:47 crc kubenswrapper[4680]: I0217 17:59:47.971990 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-nbwj4" Feb 17 17:59:48 crc kubenswrapper[4680]: I0217 17:59:48.092580 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jpnmh" event={"ID":"73c67b77-ecaf-4925-aba5-855dc368e729","Type":"ContainerStarted","Data":"8f86c42996570df99eef7181d2ee044e5a2d0b8e6ea19d3a817e5b2c9694dfbf"} Feb 17 17:59:48 crc kubenswrapper[4680]: I0217 17:59:48.093151 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jpnmh" Feb 17 17:59:48 crc kubenswrapper[4680]: I0217 17:59:48.110126 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jpnmh" podStartSLOduration=3.482655721 podStartE2EDuration="51.110105429s" podCreationTimestamp="2026-02-17 17:58:57 +0000 UTC" firstStartedPulling="2026-02-17 17:58:59.970571322 +0000 UTC m=+869.521470001" lastFinishedPulling="2026-02-17 17:59:47.59802103 +0000 UTC m=+917.148919709" observedRunningTime="2026-02-17 17:59:48.106733774 +0000 UTC m=+917.657632463" watchObservedRunningTime="2026-02-17 17:59:48.110105429 +0000 UTC m=+917.661004108" Feb 17 17:59:48 crc kubenswrapper[4680]: I0217 17:59:48.181171 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fnnbk" Feb 17 17:59:48 crc kubenswrapper[4680]: I0217 17:59:48.327983 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xqwqr" Feb 17 17:59:48 crc kubenswrapper[4680]: I0217 17:59:48.902144 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b7rbb" Feb 17 17:59:48 crc kubenswrapper[4680]: I0217 17:59:48.902203 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b7rbb" Feb 17 17:59:49 crc 
kubenswrapper[4680]: I0217 17:59:49.069427 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-t8p47" Feb 17 17:59:49 crc kubenswrapper[4680]: I0217 17:59:49.943815 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b7rbb" podUID="917e64a2-e2d5-4871-8774-82861f1ac0fe" containerName="registry-server" probeResult="failure" output=< Feb 17 17:59:49 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Feb 17 17:59:49 crc kubenswrapper[4680]: > Feb 17 17:59:49 crc kubenswrapper[4680]: I0217 17:59:49.958601 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-ff5c8777-ft44h" Feb 17 17:59:54 crc kubenswrapper[4680]: I0217 17:59:54.058331 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zxbtl" Feb 17 17:59:54 crc kubenswrapper[4680]: I0217 17:59:54.100988 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zxbtl" Feb 17 17:59:54 crc kubenswrapper[4680]: I0217 17:59:54.292860 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zxbtl"] Feb 17 17:59:54 crc kubenswrapper[4680]: I0217 17:59:54.432683 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv" Feb 17 17:59:55 crc kubenswrapper[4680]: I0217 17:59:55.132428 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zxbtl" podUID="81f62af7-df09-4294-8d1c-d023b33c3384" containerName="registry-server" containerID="cri-o://c7bdfc0c4ab30292d1adc81a4280b8c4aec2876b752f084dd59ebd6589d7e43b" gracePeriod=2 Feb 17 17:59:55 crc kubenswrapper[4680]: I0217 17:59:55.499878 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zxbtl" Feb 17 17:59:55 crc kubenswrapper[4680]: I0217 17:59:55.551415 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81f62af7-df09-4294-8d1c-d023b33c3384-catalog-content\") pod \"81f62af7-df09-4294-8d1c-d023b33c3384\" (UID: \"81f62af7-df09-4294-8d1c-d023b33c3384\") " Feb 17 17:59:55 crc kubenswrapper[4680]: I0217 17:59:55.551460 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q58bw\" (UniqueName: \"kubernetes.io/projected/81f62af7-df09-4294-8d1c-d023b33c3384-kube-api-access-q58bw\") pod \"81f62af7-df09-4294-8d1c-d023b33c3384\" (UID: \"81f62af7-df09-4294-8d1c-d023b33c3384\") " Feb 17 17:59:55 crc kubenswrapper[4680]: I0217 17:59:55.551524 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81f62af7-df09-4294-8d1c-d023b33c3384-utilities\") pod \"81f62af7-df09-4294-8d1c-d023b33c3384\" (UID: \"81f62af7-df09-4294-8d1c-d023b33c3384\") " Feb 17 17:59:55 crc kubenswrapper[4680]: I0217 17:59:55.552481 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81f62af7-df09-4294-8d1c-d023b33c3384-utilities" (OuterVolumeSpecName: "utilities") pod "81f62af7-df09-4294-8d1c-d023b33c3384" (UID: "81f62af7-df09-4294-8d1c-d023b33c3384"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:59:55 crc kubenswrapper[4680]: I0217 17:59:55.563112 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81f62af7-df09-4294-8d1c-d023b33c3384-kube-api-access-q58bw" (OuterVolumeSpecName: "kube-api-access-q58bw") pod "81f62af7-df09-4294-8d1c-d023b33c3384" (UID: "81f62af7-df09-4294-8d1c-d023b33c3384"). InnerVolumeSpecName "kube-api-access-q58bw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:59:55 crc kubenswrapper[4680]: I0217 17:59:55.613643 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81f62af7-df09-4294-8d1c-d023b33c3384-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81f62af7-df09-4294-8d1c-d023b33c3384" (UID: "81f62af7-df09-4294-8d1c-d023b33c3384"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:59:55 crc kubenswrapper[4680]: I0217 17:59:55.652894 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81f62af7-df09-4294-8d1c-d023b33c3384-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:59:55 crc kubenswrapper[4680]: I0217 17:59:55.652930 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81f62af7-df09-4294-8d1c-d023b33c3384-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:59:55 crc kubenswrapper[4680]: I0217 17:59:55.652943 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q58bw\" (UniqueName: \"kubernetes.io/projected/81f62af7-df09-4294-8d1c-d023b33c3384-kube-api-access-q58bw\") on node \"crc\" DevicePath \"\"" Feb 17 17:59:56 crc kubenswrapper[4680]: I0217 17:59:56.141123 4680 generic.go:334] "Generic (PLEG): container finished" podID="81f62af7-df09-4294-8d1c-d023b33c3384" containerID="c7bdfc0c4ab30292d1adc81a4280b8c4aec2876b752f084dd59ebd6589d7e43b" exitCode=0 Feb 17 17:59:56 crc kubenswrapper[4680]: I0217 17:59:56.141163 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zxbtl" event={"ID":"81f62af7-df09-4294-8d1c-d023b33c3384","Type":"ContainerDied","Data":"c7bdfc0c4ab30292d1adc81a4280b8c4aec2876b752f084dd59ebd6589d7e43b"} Feb 17 17:59:56 crc kubenswrapper[4680]: I0217 17:59:56.141189 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zxbtl" event={"ID":"81f62af7-df09-4294-8d1c-d023b33c3384","Type":"ContainerDied","Data":"f9494ca4a0511a1ccfb091deaa71f88bbcf275b6e8a25b9fba5487d27ad4fd55"} Feb 17 17:59:56 crc kubenswrapper[4680]: I0217 17:59:56.141191 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zxbtl" Feb 17 17:59:56 crc kubenswrapper[4680]: I0217 17:59:56.141208 4680 scope.go:117] "RemoveContainer" containerID="c7bdfc0c4ab30292d1adc81a4280b8c4aec2876b752f084dd59ebd6589d7e43b" Feb 17 17:59:56 crc kubenswrapper[4680]: I0217 17:59:56.159866 4680 scope.go:117] "RemoveContainer" containerID="c8570265cb3edd90eea4a4d3becd786ce2ab4b86ee074ae3c077902d7c4ce242" Feb 17 17:59:56 crc kubenswrapper[4680]: I0217 17:59:56.179530 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zxbtl"] Feb 17 17:59:56 crc kubenswrapper[4680]: I0217 17:59:56.189262 4680 scope.go:117] "RemoveContainer" containerID="a1e37836ef0c4d4ef808e744babaf14ae2ca8dba3983ca9bfc5062468cd72bda" Feb 17 17:59:56 crc kubenswrapper[4680]: I0217 17:59:56.189645 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zxbtl"] Feb 17 17:59:56 crc kubenswrapper[4680]: I0217 17:59:56.205466 4680 scope.go:117] "RemoveContainer" containerID="c7bdfc0c4ab30292d1adc81a4280b8c4aec2876b752f084dd59ebd6589d7e43b" Feb 17 17:59:56 crc kubenswrapper[4680]: E0217 17:59:56.206589 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7bdfc0c4ab30292d1adc81a4280b8c4aec2876b752f084dd59ebd6589d7e43b\": container with ID starting with c7bdfc0c4ab30292d1adc81a4280b8c4aec2876b752f084dd59ebd6589d7e43b not found: ID does not exist" containerID="c7bdfc0c4ab30292d1adc81a4280b8c4aec2876b752f084dd59ebd6589d7e43b" Feb 17 17:59:56 crc kubenswrapper[4680]: I0217 17:59:56.206629 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7bdfc0c4ab30292d1adc81a4280b8c4aec2876b752f084dd59ebd6589d7e43b"} err="failed to get container status \"c7bdfc0c4ab30292d1adc81a4280b8c4aec2876b752f084dd59ebd6589d7e43b\": rpc error: code = NotFound desc = could not find container \"c7bdfc0c4ab30292d1adc81a4280b8c4aec2876b752f084dd59ebd6589d7e43b\": container with ID starting with c7bdfc0c4ab30292d1adc81a4280b8c4aec2876b752f084dd59ebd6589d7e43b not found: ID does not exist" Feb 17 17:59:56 crc kubenswrapper[4680]: I0217 17:59:56.206656 4680 scope.go:117] "RemoveContainer" containerID="c8570265cb3edd90eea4a4d3becd786ce2ab4b86ee074ae3c077902d7c4ce242" Feb 17 17:59:56 crc kubenswrapper[4680]: E0217 17:59:56.207112 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8570265cb3edd90eea4a4d3becd786ce2ab4b86ee074ae3c077902d7c4ce242\": container with ID starting with c8570265cb3edd90eea4a4d3becd786ce2ab4b86ee074ae3c077902d7c4ce242 not found: ID does not exist" containerID="c8570265cb3edd90eea4a4d3becd786ce2ab4b86ee074ae3c077902d7c4ce242" Feb 17 17:59:56 crc kubenswrapper[4680]: I0217 17:59:56.207220 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8570265cb3edd90eea4a4d3becd786ce2ab4b86ee074ae3c077902d7c4ce242"} err="failed to get container status \"c8570265cb3edd90eea4a4d3becd786ce2ab4b86ee074ae3c077902d7c4ce242\": rpc error: code = NotFound desc = could not find container \"c8570265cb3edd90eea4a4d3becd786ce2ab4b86ee074ae3c077902d7c4ce242\": container with ID starting with c8570265cb3edd90eea4a4d3becd786ce2ab4b86ee074ae3c077902d7c4ce242 not found: ID does not exist" Feb 17 17:59:56 crc kubenswrapper[4680]: I0217 17:59:56.207308 4680 scope.go:117] "RemoveContainer" 
containerID="a1e37836ef0c4d4ef808e744babaf14ae2ca8dba3983ca9bfc5062468cd72bda" Feb 17 17:59:56 crc kubenswrapper[4680]: E0217 17:59:56.207740 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1e37836ef0c4d4ef808e744babaf14ae2ca8dba3983ca9bfc5062468cd72bda\": container with ID starting with a1e37836ef0c4d4ef808e744babaf14ae2ca8dba3983ca9bfc5062468cd72bda not found: ID does not exist" containerID="a1e37836ef0c4d4ef808e744babaf14ae2ca8dba3983ca9bfc5062468cd72bda" Feb 17 17:59:56 crc kubenswrapper[4680]: I0217 17:59:56.207769 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1e37836ef0c4d4ef808e744babaf14ae2ca8dba3983ca9bfc5062468cd72bda"} err="failed to get container status \"a1e37836ef0c4d4ef808e744babaf14ae2ca8dba3983ca9bfc5062468cd72bda\": rpc error: code = NotFound desc = could not find container \"a1e37836ef0c4d4ef808e744babaf14ae2ca8dba3983ca9bfc5062468cd72bda\": container with ID starting with a1e37836ef0c4d4ef808e744babaf14ae2ca8dba3983ca9bfc5062468cd72bda not found: ID does not exist" Feb 17 17:59:57 crc kubenswrapper[4680]: I0217 17:59:57.112931 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81f62af7-df09-4294-8d1c-d023b33c3384" path="/var/lib/kubelet/pods/81f62af7-df09-4294-8d1c-d023b33c3384/volumes" Feb 17 17:59:58 crc kubenswrapper[4680]: I0217 17:59:58.550056 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jpnmh" Feb 17 17:59:58 crc kubenswrapper[4680]: I0217 17:59:58.942111 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b7rbb" Feb 17 17:59:58 crc kubenswrapper[4680]: I0217 17:59:58.984745 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b7rbb" Feb 17 17:59:59 crc kubenswrapper[4680]: I0217 17:59:59.691491 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b7rbb"] Feb 17 18:00:00 crc kubenswrapper[4680]: I0217 18:00:00.169012 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b7rbb" podUID="917e64a2-e2d5-4871-8774-82861f1ac0fe" containerName="registry-server" containerID="cri-o://687100b36d4d4b3c7f5a261614f74aca52e3b105c31afc3a510a99923e99c7a2" gracePeriod=2 Feb 17 18:00:00 crc kubenswrapper[4680]: I0217 18:00:00.175700 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522520-z6ls8"] Feb 17 18:00:00 crc kubenswrapper[4680]: E0217 18:00:00.176008 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f62af7-df09-4294-8d1c-d023b33c3384" containerName="extract-content" Feb 17 18:00:00 crc kubenswrapper[4680]: I0217 18:00:00.176025 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f62af7-df09-4294-8d1c-d023b33c3384" containerName="extract-content" Feb 17 18:00:00 crc kubenswrapper[4680]: E0217 18:00:00.176047 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f62af7-df09-4294-8d1c-d023b33c3384" containerName="extract-utilities" Feb 17 18:00:00 crc kubenswrapper[4680]: I0217 18:00:00.176055 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f62af7-df09-4294-8d1c-d023b33c3384" containerName="extract-utilities" Feb 17 18:00:00 crc kubenswrapper[4680]: E0217 
18:00:00.176068 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f62af7-df09-4294-8d1c-d023b33c3384" containerName="registry-server" Feb 17 18:00:00 crc kubenswrapper[4680]: I0217 18:00:00.176075 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f62af7-df09-4294-8d1c-d023b33c3384" containerName="registry-server" Feb 17 18:00:00 crc kubenswrapper[4680]: I0217 18:00:00.176295 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="81f62af7-df09-4294-8d1c-d023b33c3384" containerName="registry-server" Feb 17 18:00:00 crc kubenswrapper[4680]: I0217 18:00:00.176899 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522520-z6ls8" Feb 17 18:00:00 crc kubenswrapper[4680]: I0217 18:00:00.179558 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 18:00:00 crc kubenswrapper[4680]: I0217 18:00:00.179757 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 18:00:00 crc kubenswrapper[4680]: I0217 18:00:00.204152 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522520-z6ls8"] Feb 17 18:00:00 crc kubenswrapper[4680]: I0217 18:00:00.309000 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5-config-volume\") pod \"collect-profiles-29522520-z6ls8\" (UID: \"a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522520-z6ls8" Feb 17 18:00:00 crc kubenswrapper[4680]: I0217 18:00:00.309080 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5-secret-volume\") pod \"collect-profiles-29522520-z6ls8\" (UID: \"a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522520-z6ls8" Feb 17 18:00:00 crc kubenswrapper[4680]: I0217 18:00:00.309122 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8f78\" (UniqueName: \"kubernetes.io/projected/a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5-kube-api-access-z8f78\") pod \"collect-profiles-29522520-z6ls8\" (UID: \"a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522520-z6ls8" Feb 17 18:00:00 crc kubenswrapper[4680]: I0217 18:00:00.411012 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8f78\" (UniqueName: \"kubernetes.io/projected/a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5-kube-api-access-z8f78\") pod \"collect-profiles-29522520-z6ls8\" (UID: \"a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522520-z6ls8" Feb 17 18:00:00 crc kubenswrapper[4680]: I0217 18:00:00.411174 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5-config-volume\") pod \"collect-profiles-29522520-z6ls8\" (UID: \"a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522520-z6ls8" Feb 
17 18:00:00 crc kubenswrapper[4680]: I0217 18:00:00.411200 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5-secret-volume\") pod \"collect-profiles-29522520-z6ls8\" (UID: \"a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522520-z6ls8" Feb 17 18:00:00 crc kubenswrapper[4680]: I0217 18:00:00.412580 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5-config-volume\") pod \"collect-profiles-29522520-z6ls8\" (UID: \"a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522520-z6ls8" Feb 17 18:00:00 crc kubenswrapper[4680]: I0217 18:00:00.432998 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5-secret-volume\") pod \"collect-profiles-29522520-z6ls8\" (UID: \"a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522520-z6ls8" Feb 17 18:00:00 crc kubenswrapper[4680]: I0217 18:00:00.438992 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8f78\" (UniqueName: \"kubernetes.io/projected/a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5-kube-api-access-z8f78\") pod \"collect-profiles-29522520-z6ls8\" (UID: \"a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522520-z6ls8" Feb 17 18:00:00 crc kubenswrapper[4680]: I0217 18:00:00.494745 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522520-z6ls8" Feb 17 18:00:00 crc kubenswrapper[4680]: I0217 18:00:00.873622 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b7rbb" Feb 17 18:00:00 crc kubenswrapper[4680]: I0217 18:00:00.918525 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/917e64a2-e2d5-4871-8774-82861f1ac0fe-catalog-content\") pod \"917e64a2-e2d5-4871-8774-82861f1ac0fe\" (UID: \"917e64a2-e2d5-4871-8774-82861f1ac0fe\") " Feb 17 18:00:00 crc kubenswrapper[4680]: I0217 18:00:00.918869 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/917e64a2-e2d5-4871-8774-82861f1ac0fe-utilities\") pod \"917e64a2-e2d5-4871-8774-82861f1ac0fe\" (UID: \"917e64a2-e2d5-4871-8774-82861f1ac0fe\") " Feb 17 18:00:00 crc kubenswrapper[4680]: I0217 18:00:00.918934 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg5x7\" (UniqueName: \"kubernetes.io/projected/917e64a2-e2d5-4871-8774-82861f1ac0fe-kube-api-access-zg5x7\") pod \"917e64a2-e2d5-4871-8774-82861f1ac0fe\" (UID: \"917e64a2-e2d5-4871-8774-82861f1ac0fe\") " Feb 17 18:00:00 crc kubenswrapper[4680]: I0217 18:00:00.919860 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/917e64a2-e2d5-4871-8774-82861f1ac0fe-utilities" (OuterVolumeSpecName: "utilities") pod "917e64a2-e2d5-4871-8774-82861f1ac0fe" (UID: "917e64a2-e2d5-4871-8774-82861f1ac0fe"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:00:00 crc kubenswrapper[4680]: I0217 18:00:00.926522 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/917e64a2-e2d5-4871-8774-82861f1ac0fe-kube-api-access-zg5x7" (OuterVolumeSpecName: "kube-api-access-zg5x7") pod "917e64a2-e2d5-4871-8774-82861f1ac0fe" (UID: "917e64a2-e2d5-4871-8774-82861f1ac0fe"). InnerVolumeSpecName "kube-api-access-zg5x7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:00:01 crc kubenswrapper[4680]: I0217 18:00:01.020270 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/917e64a2-e2d5-4871-8774-82861f1ac0fe-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 18:00:01 crc kubenswrapper[4680]: I0217 18:00:01.020555 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg5x7\" (UniqueName: \"kubernetes.io/projected/917e64a2-e2d5-4871-8774-82861f1ac0fe-kube-api-access-zg5x7\") on node \"crc\" DevicePath \"\"" Feb 17 18:00:01 crc kubenswrapper[4680]: I0217 18:00:01.062620 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/917e64a2-e2d5-4871-8774-82861f1ac0fe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "917e64a2-e2d5-4871-8774-82861f1ac0fe" (UID: "917e64a2-e2d5-4871-8774-82861f1ac0fe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:00:01 crc kubenswrapper[4680]: I0217 18:00:01.082370 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522520-z6ls8"] Feb 17 18:00:01 crc kubenswrapper[4680]: W0217 18:00:01.085636 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda30f4c08_ceb0_421a_b15d_9ab1dfcb21a5.slice/crio-b85c24ea4c51f9eec2a608ee024b67e9c04575352691fd56a1b8008e14409da5 WatchSource:0}: Error finding container b85c24ea4c51f9eec2a608ee024b67e9c04575352691fd56a1b8008e14409da5: Status 404 returned error can't find the container with id b85c24ea4c51f9eec2a608ee024b67e9c04575352691fd56a1b8008e14409da5 Feb 17 18:00:01 crc kubenswrapper[4680]: I0217 18:00:01.121878 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/917e64a2-e2d5-4871-8774-82861f1ac0fe-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 18:00:01 crc kubenswrapper[4680]: I0217 18:00:01.178715 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522520-z6ls8" event={"ID":"a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5","Type":"ContainerStarted","Data":"b85c24ea4c51f9eec2a608ee024b67e9c04575352691fd56a1b8008e14409da5"} Feb 17 18:00:01 crc kubenswrapper[4680]: I0217 18:00:01.181705 4680 generic.go:334] "Generic (PLEG): container finished" podID="917e64a2-e2d5-4871-8774-82861f1ac0fe" containerID="687100b36d4d4b3c7f5a261614f74aca52e3b105c31afc3a510a99923e99c7a2" exitCode=0 Feb 17 18:00:01 crc kubenswrapper[4680]: I0217 18:00:01.181751 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7rbb" event={"ID":"917e64a2-e2d5-4871-8774-82861f1ac0fe","Type":"ContainerDied","Data":"687100b36d4d4b3c7f5a261614f74aca52e3b105c31afc3a510a99923e99c7a2"} Feb 17 18:00:01 crc kubenswrapper[4680]: I0217 18:00:01.181779 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b7rbb" Feb 17 18:00:01 crc kubenswrapper[4680]: I0217 18:00:01.181799 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7rbb" event={"ID":"917e64a2-e2d5-4871-8774-82861f1ac0fe","Type":"ContainerDied","Data":"1fff211c91ca6cce81a51654639b78fd6143b6c8b02553388244d4bf4dd6fccf"} Feb 17 18:00:01 crc kubenswrapper[4680]: I0217 18:00:01.181823 4680 scope.go:117] "RemoveContainer" containerID="687100b36d4d4b3c7f5a261614f74aca52e3b105c31afc3a510a99923e99c7a2" Feb 17 18:00:01 crc kubenswrapper[4680]: I0217 18:00:01.199687 4680 scope.go:117] "RemoveContainer" containerID="7f0cc1259695a805b6740fa99e34e7d373d9c970900772873b781c53e1ca18b0" Feb 17 18:00:01 crc kubenswrapper[4680]: I0217 18:00:01.210963 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b7rbb"] Feb 17 18:00:01 crc kubenswrapper[4680]: I0217 18:00:01.217444 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b7rbb"] Feb 17 18:00:01 crc kubenswrapper[4680]: I0217 18:00:01.236572 4680 scope.go:117] "RemoveContainer" containerID="e16697aab6fa7c45ab2120b086b5f6a26cd66b517c3bb50a4446e30dad6d8131" Feb 17 18:00:01 crc kubenswrapper[4680]: I0217 18:00:01.265070 4680 scope.go:117] "RemoveContainer" containerID="687100b36d4d4b3c7f5a261614f74aca52e3b105c31afc3a510a99923e99c7a2" Feb 17 18:00:01 crc kubenswrapper[4680]: E0217 18:00:01.265690 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"687100b36d4d4b3c7f5a261614f74aca52e3b105c31afc3a510a99923e99c7a2\": container with ID starting with 687100b36d4d4b3c7f5a261614f74aca52e3b105c31afc3a510a99923e99c7a2 not found: ID does not exist" containerID="687100b36d4d4b3c7f5a261614f74aca52e3b105c31afc3a510a99923e99c7a2" Feb 17 18:00:01 crc kubenswrapper[4680]: I0217 18:00:01.265725 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"687100b36d4d4b3c7f5a261614f74aca52e3b105c31afc3a510a99923e99c7a2"} err="failed to get container status \"687100b36d4d4b3c7f5a261614f74aca52e3b105c31afc3a510a99923e99c7a2\": rpc error: code = NotFound desc = could not find container \"687100b36d4d4b3c7f5a261614f74aca52e3b105c31afc3a510a99923e99c7a2\": container with ID starting with 687100b36d4d4b3c7f5a261614f74aca52e3b105c31afc3a510a99923e99c7a2 not found: ID does not exist" Feb 17 18:00:01 crc kubenswrapper[4680]: I0217 18:00:01.265776 4680 scope.go:117] "RemoveContainer" containerID="7f0cc1259695a805b6740fa99e34e7d373d9c970900772873b781c53e1ca18b0" Feb 17 18:00:01 crc kubenswrapper[4680]: E0217 18:00:01.266209 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f0cc1259695a805b6740fa99e34e7d373d9c970900772873b781c53e1ca18b0\": container with ID starting with 7f0cc1259695a805b6740fa99e34e7d373d9c970900772873b781c53e1ca18b0 not found: ID does not exist" containerID="7f0cc1259695a805b6740fa99e34e7d373d9c970900772873b781c53e1ca18b0" Feb 17 18:00:01 crc kubenswrapper[4680]: I0217 18:00:01.266246 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f0cc1259695a805b6740fa99e34e7d373d9c970900772873b781c53e1ca18b0"} err="failed to get container status \"7f0cc1259695a805b6740fa99e34e7d373d9c970900772873b781c53e1ca18b0\": rpc error: code = NotFound desc = could not find container 
\"7f0cc1259695a805b6740fa99e34e7d373d9c970900772873b781c53e1ca18b0\": container with ID starting with 7f0cc1259695a805b6740fa99e34e7d373d9c970900772873b781c53e1ca18b0 not found: ID does not exist" Feb 17 18:00:01 crc kubenswrapper[4680]: I0217 18:00:01.266261 4680 scope.go:117] "RemoveContainer" containerID="e16697aab6fa7c45ab2120b086b5f6a26cd66b517c3bb50a4446e30dad6d8131" Feb 17 18:00:01 crc kubenswrapper[4680]: E0217 18:00:01.266716 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e16697aab6fa7c45ab2120b086b5f6a26cd66b517c3bb50a4446e30dad6d8131\": container with ID starting with e16697aab6fa7c45ab2120b086b5f6a26cd66b517c3bb50a4446e30dad6d8131 not found: ID does not exist" containerID="e16697aab6fa7c45ab2120b086b5f6a26cd66b517c3bb50a4446e30dad6d8131" Feb 17 18:00:01 crc kubenswrapper[4680]: I0217 18:00:01.266773 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e16697aab6fa7c45ab2120b086b5f6a26cd66b517c3bb50a4446e30dad6d8131"} err="failed to get container status \"e16697aab6fa7c45ab2120b086b5f6a26cd66b517c3bb50a4446e30dad6d8131\": rpc error: code = NotFound desc = could not find container \"e16697aab6fa7c45ab2120b086b5f6a26cd66b517c3bb50a4446e30dad6d8131\": container with ID starting with e16697aab6fa7c45ab2120b086b5f6a26cd66b517c3bb50a4446e30dad6d8131 not found: ID does not exist" Feb 17 18:00:02 crc kubenswrapper[4680]: I0217 18:00:02.201303 4680 generic.go:334] "Generic (PLEG): container finished" podID="a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5" containerID="0ddbcda602c4a8d221f44b10a2413c97bbbe06e2ac84c5b269e3f31566713f4a" exitCode=0 Feb 17 18:00:02 crc kubenswrapper[4680]: I0217 18:00:02.201654 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522520-z6ls8" event={"ID":"a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5","Type":"ContainerDied","Data":"0ddbcda602c4a8d221f44b10a2413c97bbbe06e2ac84c5b269e3f31566713f4a"} Feb 17 18:00:03 crc kubenswrapper[4680]: I0217 18:00:03.125117 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="917e64a2-e2d5-4871-8774-82861f1ac0fe" path="/var/lib/kubelet/pods/917e64a2-e2d5-4871-8774-82861f1ac0fe/volumes" Feb 17 18:00:03 crc kubenswrapper[4680]: I0217 18:00:03.465608 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522520-z6ls8" Feb 17 18:00:03 crc kubenswrapper[4680]: I0217 18:00:03.653380 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5-secret-volume\") pod \"a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5\" (UID: \"a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5\") " Feb 17 18:00:03 crc kubenswrapper[4680]: I0217 18:00:03.653571 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5-config-volume\") pod \"a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5\" (UID: \"a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5\") " Feb 17 18:00:03 crc kubenswrapper[4680]: I0217 18:00:03.653618 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8f78\" (UniqueName: \"kubernetes.io/projected/a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5-kube-api-access-z8f78\") pod \"a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5\" (UID: \"a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5\") " Feb 17 18:00:03 crc kubenswrapper[4680]: I0217 18:00:03.654626 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5-config-volume" (OuterVolumeSpecName: "config-volume") pod "a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5" (UID: "a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:00:03 crc kubenswrapper[4680]: I0217 18:00:03.661477 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5-kube-api-access-z8f78" (OuterVolumeSpecName: "kube-api-access-z8f78") pod "a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5" (UID: "a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5"). InnerVolumeSpecName "kube-api-access-z8f78". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:00:03 crc kubenswrapper[4680]: I0217 18:00:03.664542 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5" (UID: "a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:00:03 crc kubenswrapper[4680]: I0217 18:00:03.755138 4680 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 18:00:03 crc kubenswrapper[4680]: I0217 18:00:03.755176 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8f78\" (UniqueName: \"kubernetes.io/projected/a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5-kube-api-access-z8f78\") on node \"crc\" DevicePath \"\"" Feb 17 18:00:03 crc kubenswrapper[4680]: I0217 18:00:03.755188 4680 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 18:00:04 crc kubenswrapper[4680]: I0217 18:00:04.217504 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522520-z6ls8" event={"ID":"a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5","Type":"ContainerDied","Data":"b85c24ea4c51f9eec2a608ee024b67e9c04575352691fd56a1b8008e14409da5"} Feb 17 18:00:04 crc kubenswrapper[4680]: I0217 18:00:04.217542 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522520-z6ls8" Feb 17 18:00:04 crc kubenswrapper[4680]: I0217 18:00:04.217552 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b85c24ea4c51f9eec2a608ee024b67e9c04575352691fd56a1b8008e14409da5" Feb 17 18:00:05 crc kubenswrapper[4680]: I0217 18:00:05.588420 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:00:05 crc kubenswrapper[4680]: I0217 18:00:05.588751 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:00:05 crc kubenswrapper[4680]: I0217 18:00:05.588799 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" Feb 17 18:00:05 crc kubenswrapper[4680]: I0217 18:00:05.589412 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"44b711f4401b0e5c2c2810ad8495da08e3e147502d358d88f7938a52612ea1ab"} pod="openshift-machine-config-operator/machine-config-daemon-rfw56" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 18:00:05 crc kubenswrapper[4680]: I0217 18:00:05.589468 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" containerID="cri-o://44b711f4401b0e5c2c2810ad8495da08e3e147502d358d88f7938a52612ea1ab" gracePeriod=600 Feb 17 18:00:06 crc kubenswrapper[4680]: I0217 18:00:06.240520 4680 generic.go:334] "Generic (PLEG): container finished" 
podID="8dd08058-016e-4607-8be6-40463674c2fe" containerID="44b711f4401b0e5c2c2810ad8495da08e3e147502d358d88f7938a52612ea1ab" exitCode=0 Feb 17 18:00:06 crc kubenswrapper[4680]: I0217 18:00:06.240554 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerDied","Data":"44b711f4401b0e5c2c2810ad8495da08e3e147502d358d88f7938a52612ea1ab"} Feb 17 18:00:06 crc kubenswrapper[4680]: I0217 18:00:06.240886 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerStarted","Data":"7648edfdb05be717c37ab1637f452b4f2bc8bd71444b3bbf2ede9558e7be04df"} Feb 17 18:00:06 crc kubenswrapper[4680]: I0217 18:00:06.240908 4680 scope.go:117] "RemoveContainer" containerID="b2c9e5cbe14222efd0ffdc11f960ea7f538fdce4e27dc297816e521875a655d0" Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.029662 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-h4f7r"] Feb 17 18:00:16 crc kubenswrapper[4680]: E0217 18:00:16.030446 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5" containerName="collect-profiles" Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.030459 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5" containerName="collect-profiles" Feb 17 18:00:16 crc kubenswrapper[4680]: E0217 18:00:16.030472 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="917e64a2-e2d5-4871-8774-82861f1ac0fe" containerName="extract-content" Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.030477 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="917e64a2-e2d5-4871-8774-82861f1ac0fe" containerName="extract-content" Feb 17 18:00:16 crc kubenswrapper[4680]: E0217 18:00:16.030487 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="917e64a2-e2d5-4871-8774-82861f1ac0fe" containerName="extract-utilities" Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.030492 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="917e64a2-e2d5-4871-8774-82861f1ac0fe" containerName="extract-utilities" Feb 17 18:00:16 crc kubenswrapper[4680]: E0217 18:00:16.030503 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="917e64a2-e2d5-4871-8774-82861f1ac0fe" containerName="registry-server" Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.030509 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="917e64a2-e2d5-4871-8774-82861f1ac0fe" containerName="registry-server" Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.030628 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="917e64a2-e2d5-4871-8774-82861f1ac0fe" containerName="registry-server" Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.030647 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5" containerName="collect-profiles" Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.031290 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-h4f7r" Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.034794 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-fmpkm" Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.034825 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.035626 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.035675 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.054238 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-h4f7r"] Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.114119 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-fhwrg"] Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.115349 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-fhwrg" Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.116480 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49af63a1-f109-42f7-a96e-d1364799f6be-config\") pod \"dnsmasq-dns-675f4bcbfc-h4f7r\" (UID: \"49af63a1-f109-42f7-a96e-d1364799f6be\") " pod="openstack/dnsmasq-dns-675f4bcbfc-h4f7r" Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.116598 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vthv5\" (UniqueName: \"kubernetes.io/projected/c26964fe-e05b-4195-84f8-469d567de82f-kube-api-access-vthv5\") pod \"dnsmasq-dns-78dd6ddcc-fhwrg\" (UID: \"c26964fe-e05b-4195-84f8-469d567de82f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fhwrg" Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.116653 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5fd8\" (UniqueName: \"kubernetes.io/projected/49af63a1-f109-42f7-a96e-d1364799f6be-kube-api-access-p5fd8\") pod \"dnsmasq-dns-675f4bcbfc-h4f7r\" (UID: \"49af63a1-f109-42f7-a96e-d1364799f6be\") " pod="openstack/dnsmasq-dns-675f4bcbfc-h4f7r" Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.116869 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c26964fe-e05b-4195-84f8-469d567de82f-config\") pod \"dnsmasq-dns-78dd6ddcc-fhwrg\" (UID: \"c26964fe-e05b-4195-84f8-469d567de82f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fhwrg" Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.116937 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c26964fe-e05b-4195-84f8-469d567de82f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-fhwrg\" (UID: \"c26964fe-e05b-4195-84f8-469d567de82f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fhwrg" Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.121456 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.128731 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-78dd6ddcc-fhwrg"] Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.217769 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c26964fe-e05b-4195-84f8-469d567de82f-config\") pod \"dnsmasq-dns-78dd6ddcc-fhwrg\" (UID: \"c26964fe-e05b-4195-84f8-469d567de82f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fhwrg" Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.217839 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c26964fe-e05b-4195-84f8-469d567de82f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-fhwrg\" (UID: \"c26964fe-e05b-4195-84f8-469d567de82f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fhwrg" Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.217874 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49af63a1-f109-42f7-a96e-d1364799f6be-config\") pod \"dnsmasq-dns-675f4bcbfc-h4f7r\" (UID: \"49af63a1-f109-42f7-a96e-d1364799f6be\") " pod="openstack/dnsmasq-dns-675f4bcbfc-h4f7r" Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.217918 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vthv5\" (UniqueName: \"kubernetes.io/projected/c26964fe-e05b-4195-84f8-469d567de82f-kube-api-access-vthv5\") pod \"dnsmasq-dns-78dd6ddcc-fhwrg\" (UID: \"c26964fe-e05b-4195-84f8-469d567de82f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fhwrg" Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.217945 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5fd8\" (UniqueName: \"kubernetes.io/projected/49af63a1-f109-42f7-a96e-d1364799f6be-kube-api-access-p5fd8\") pod \"dnsmasq-dns-675f4bcbfc-h4f7r\" (UID: \"49af63a1-f109-42f7-a96e-d1364799f6be\") " pod="openstack/dnsmasq-dns-675f4bcbfc-h4f7r" Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.218852 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c26964fe-e05b-4195-84f8-469d567de82f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-fhwrg\" (UID: \"c26964fe-e05b-4195-84f8-469d567de82f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fhwrg" Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.218894 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49af63a1-f109-42f7-a96e-d1364799f6be-config\") pod \"dnsmasq-dns-675f4bcbfc-h4f7r\" (UID: \"49af63a1-f109-42f7-a96e-d1364799f6be\") " pod="openstack/dnsmasq-dns-675f4bcbfc-h4f7r" Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.219420 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c26964fe-e05b-4195-84f8-469d567de82f-config\") pod \"dnsmasq-dns-78dd6ddcc-fhwrg\" (UID: \"c26964fe-e05b-4195-84f8-469d567de82f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fhwrg" Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.236530 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vthv5\" (UniqueName: \"kubernetes.io/projected/c26964fe-e05b-4195-84f8-469d567de82f-kube-api-access-vthv5\") pod \"dnsmasq-dns-78dd6ddcc-fhwrg\" (UID: \"c26964fe-e05b-4195-84f8-469d567de82f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fhwrg" Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.244112 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-p5fd8\" (UniqueName: \"kubernetes.io/projected/49af63a1-f109-42f7-a96e-d1364799f6be-kube-api-access-p5fd8\") pod \"dnsmasq-dns-675f4bcbfc-h4f7r\" (UID: \"49af63a1-f109-42f7-a96e-d1364799f6be\") " pod="openstack/dnsmasq-dns-675f4bcbfc-h4f7r" Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.347254 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-h4f7r" Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.437990 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-fhwrg" Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.703577 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-fhwrg"] Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.713302 4680 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 18:00:16 crc kubenswrapper[4680]: I0217 18:00:16.786117 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-h4f7r"] Feb 17 18:00:16 crc kubenswrapper[4680]: W0217 18:00:16.796651 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49af63a1_f109_42f7_a96e_d1364799f6be.slice/crio-bc222d2ec7f3f42f11080a27c42acf58b0919decb8352def3d7af3d838f05b0a WatchSource:0}: Error finding container bc222d2ec7f3f42f11080a27c42acf58b0919decb8352def3d7af3d838f05b0a: Status 404 returned error can't find the container with id bc222d2ec7f3f42f11080a27c42acf58b0919decb8352def3d7af3d838f05b0a Feb 17 18:00:17 crc kubenswrapper[4680]: I0217 18:00:17.311937 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-h4f7r" event={"ID":"49af63a1-f109-42f7-a96e-d1364799f6be","Type":"ContainerStarted","Data":"bc222d2ec7f3f42f11080a27c42acf58b0919decb8352def3d7af3d838f05b0a"} Feb 17 18:00:17 crc kubenswrapper[4680]: I0217 18:00:17.313297 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-fhwrg" event={"ID":"c26964fe-e05b-4195-84f8-469d567de82f","Type":"ContainerStarted","Data":"4305f90957f93208ea421a281c29a93d51ff422627a9d6fcb351cf856891a0fc"} Feb 17 18:00:18 crc kubenswrapper[4680]: I0217 18:00:18.711232 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-h4f7r"] Feb 17 18:00:18 crc kubenswrapper[4680]: I0217 18:00:18.744210 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-b8tp9"] Feb 17 18:00:18 crc kubenswrapper[4680]: I0217 18:00:18.745511 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-b8tp9" Feb 17 18:00:18 crc kubenswrapper[4680]: I0217 18:00:18.777405 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-b8tp9"] Feb 17 18:00:18 crc kubenswrapper[4680]: I0217 18:00:18.872126 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a-config\") pod \"dnsmasq-dns-666b6646f7-b8tp9\" (UID: \"81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a\") " pod="openstack/dnsmasq-dns-666b6646f7-b8tp9" Feb 17 18:00:18 crc kubenswrapper[4680]: I0217 18:00:18.872246 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a-dns-svc\") pod \"dnsmasq-dns-666b6646f7-b8tp9\" (UID: \"81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a\") " pod="openstack/dnsmasq-dns-666b6646f7-b8tp9" Feb 17 18:00:18 crc kubenswrapper[4680]: I0217 18:00:18.872297 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjcw9\" (UniqueName: \"kubernetes.io/projected/81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a-kube-api-access-qjcw9\") pod \"dnsmasq-dns-666b6646f7-b8tp9\" (UID: \"81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a\") " pod="openstack/dnsmasq-dns-666b6646f7-b8tp9" Feb 17 18:00:18 crc kubenswrapper[4680]: I0217 18:00:18.974041 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjcw9\" (UniqueName: \"kubernetes.io/projected/81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a-kube-api-access-qjcw9\") pod \"dnsmasq-dns-666b6646f7-b8tp9\" (UID: \"81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a\") " pod="openstack/dnsmasq-dns-666b6646f7-b8tp9" Feb 17 18:00:18 crc kubenswrapper[4680]: I0217 18:00:18.974120 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a-config\") pod \"dnsmasq-dns-666b6646f7-b8tp9\" (UID: \"81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a\") " pod="openstack/dnsmasq-dns-666b6646f7-b8tp9" Feb 17 18:00:18 crc kubenswrapper[4680]: I0217 18:00:18.974242 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a-dns-svc\") pod \"dnsmasq-dns-666b6646f7-b8tp9\" (UID: \"81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a\") " pod="openstack/dnsmasq-dns-666b6646f7-b8tp9" Feb 17 18:00:18 crc kubenswrapper[4680]: I0217 18:00:18.975006 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a-config\") pod \"dnsmasq-dns-666b6646f7-b8tp9\" (UID: \"81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a\") " pod="openstack/dnsmasq-dns-666b6646f7-b8tp9" Feb 17 18:00:18 crc kubenswrapper[4680]: I0217 18:00:18.975046 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a-dns-svc\") pod \"dnsmasq-dns-666b6646f7-b8tp9\" (UID: \"81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a\") " pod="openstack/dnsmasq-dns-666b6646f7-b8tp9" Feb 17 18:00:19 crc kubenswrapper[4680]: I0217 18:00:19.013756 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjcw9\" (UniqueName: 
\"kubernetes.io/projected/81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a-kube-api-access-qjcw9\") pod \"dnsmasq-dns-666b6646f7-b8tp9\" (UID: \"81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a\") " pod="openstack/dnsmasq-dns-666b6646f7-b8tp9" Feb 17 18:00:19 crc kubenswrapper[4680]: I0217 18:00:19.115790 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-b8tp9" Feb 17 18:00:19 crc kubenswrapper[4680]: I0217 18:00:19.407158 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-fhwrg"] Feb 17 18:00:19 crc kubenswrapper[4680]: I0217 18:00:19.471599 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4z7hf"] Feb 17 18:00:19 crc kubenswrapper[4680]: I0217 18:00:19.472761 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4z7hf" Feb 17 18:00:19 crc kubenswrapper[4680]: I0217 18:00:19.498504 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4z7hf"] Feb 17 18:00:19 crc kubenswrapper[4680]: I0217 18:00:19.582048 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnrx4\" (UniqueName: \"kubernetes.io/projected/a108e860-f849-47e3-b9d7-26fe735925db-kube-api-access-bnrx4\") pod \"dnsmasq-dns-57d769cc4f-4z7hf\" (UID: \"a108e860-f849-47e3-b9d7-26fe735925db\") " pod="openstack/dnsmasq-dns-57d769cc4f-4z7hf" Feb 17 18:00:19 crc kubenswrapper[4680]: I0217 18:00:19.582133 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a108e860-f849-47e3-b9d7-26fe735925db-config\") pod \"dnsmasq-dns-57d769cc4f-4z7hf\" (UID: \"a108e860-f849-47e3-b9d7-26fe735925db\") " pod="openstack/dnsmasq-dns-57d769cc4f-4z7hf" Feb 17 18:00:19 crc kubenswrapper[4680]: I0217 18:00:19.582150 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a108e860-f849-47e3-b9d7-26fe735925db-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-4z7hf\" (UID: \"a108e860-f849-47e3-b9d7-26fe735925db\") " pod="openstack/dnsmasq-dns-57d769cc4f-4z7hf" Feb 17 18:00:19 crc kubenswrapper[4680]: I0217 18:00:19.683924 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a108e860-f849-47e3-b9d7-26fe735925db-config\") pod \"dnsmasq-dns-57d769cc4f-4z7hf\" (UID: \"a108e860-f849-47e3-b9d7-26fe735925db\") " pod="openstack/dnsmasq-dns-57d769cc4f-4z7hf" Feb 17 18:00:19 crc kubenswrapper[4680]: I0217 18:00:19.684736 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a108e860-f849-47e3-b9d7-26fe735925db-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-4z7hf\" (UID: \"a108e860-f849-47e3-b9d7-26fe735925db\") " pod="openstack/dnsmasq-dns-57d769cc4f-4z7hf" Feb 17 18:00:19 crc kubenswrapper[4680]: I0217 18:00:19.685052 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnrx4\" (UniqueName: \"kubernetes.io/projected/a108e860-f849-47e3-b9d7-26fe735925db-kube-api-access-bnrx4\") pod \"dnsmasq-dns-57d769cc4f-4z7hf\" (UID: \"a108e860-f849-47e3-b9d7-26fe735925db\") " pod="openstack/dnsmasq-dns-57d769cc4f-4z7hf" Feb 17 18:00:19 crc kubenswrapper[4680]: I0217 18:00:19.685910 4680 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a108e860-f849-47e3-b9d7-26fe735925db-config\") pod \"dnsmasq-dns-57d769cc4f-4z7hf\" (UID: \"a108e860-f849-47e3-b9d7-26fe735925db\") " pod="openstack/dnsmasq-dns-57d769cc4f-4z7hf" Feb 17 18:00:19 crc kubenswrapper[4680]: I0217 18:00:19.686898 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a108e860-f849-47e3-b9d7-26fe735925db-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-4z7hf\" (UID: \"a108e860-f849-47e3-b9d7-26fe735925db\") " pod="openstack/dnsmasq-dns-57d769cc4f-4z7hf" Feb 17 18:00:19 crc kubenswrapper[4680]: I0217 18:00:19.721500 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnrx4\" (UniqueName: \"kubernetes.io/projected/a108e860-f849-47e3-b9d7-26fe735925db-kube-api-access-bnrx4\") pod \"dnsmasq-dns-57d769cc4f-4z7hf\" (UID: \"a108e860-f849-47e3-b9d7-26fe735925db\") " pod="openstack/dnsmasq-dns-57d769cc4f-4z7hf" Feb 17 18:00:19 crc kubenswrapper[4680]: I0217 18:00:19.794654 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4z7hf" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.118463 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.120126 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.138958 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-lz7x5" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.139184 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.139363 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.139484 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.139621 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.139805 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.159096 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.159303 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-b8tp9"] Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.165514 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.309604 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/81eeaf28-d5ac-44e3-bad2-215fc89f4895-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.310001 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/81eeaf28-d5ac-44e3-bad2-215fc89f4895-pod-info\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.310115 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/81eeaf28-d5ac-44e3-bad2-215fc89f4895-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.310239 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/81eeaf28-d5ac-44e3-bad2-215fc89f4895-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.310451 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81eeaf28-d5ac-44e3-bad2-215fc89f4895-config-data\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.310564 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjcvg\" (UniqueName: \"kubernetes.io/projected/81eeaf28-d5ac-44e3-bad2-215fc89f4895-kube-api-access-vjcvg\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.310742 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/81eeaf28-d5ac-44e3-bad2-215fc89f4895-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.310921 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.311081 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/81eeaf28-d5ac-44e3-bad2-215fc89f4895-server-conf\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.311230 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/81eeaf28-d5ac-44e3-bad2-215fc89f4895-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.311361 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/81eeaf28-d5ac-44e3-bad2-215fc89f4895-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.368553 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-b8tp9" event={"ID":"81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a","Type":"ContainerStarted","Data":"d36fcb08f7db7aba147b8595a40fb92da89eddba2d5b7091e33d974cd3b3b5b8"} Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.412385 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/81eeaf28-d5ac-44e3-bad2-215fc89f4895-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.412455 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/81eeaf28-d5ac-44e3-bad2-215fc89f4895-pod-info\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.412486 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/81eeaf28-d5ac-44e3-bad2-215fc89f4895-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.412535 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/81eeaf28-d5ac-44e3-bad2-215fc89f4895-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.412561 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81eeaf28-d5ac-44e3-bad2-215fc89f4895-config-data\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.412587 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjcvg\" (UniqueName: \"kubernetes.io/projected/81eeaf28-d5ac-44e3-bad2-215fc89f4895-kube-api-access-vjcvg\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.412634 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/81eeaf28-d5ac-44e3-bad2-215fc89f4895-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.412683 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.412704 4680 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/81eeaf28-d5ac-44e3-bad2-215fc89f4895-server-conf\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.412725 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/81eeaf28-d5ac-44e3-bad2-215fc89f4895-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.412743 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/81eeaf28-d5ac-44e3-bad2-215fc89f4895-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.412967 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/81eeaf28-d5ac-44e3-bad2-215fc89f4895-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.413586 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/81eeaf28-d5ac-44e3-bad2-215fc89f4895-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.414415 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.419495 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81eeaf28-d5ac-44e3-bad2-215fc89f4895-config-data\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.422569 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/81eeaf28-d5ac-44e3-bad2-215fc89f4895-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.422900 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/81eeaf28-d5ac-44e3-bad2-215fc89f4895-server-conf\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.425990 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/81eeaf28-d5ac-44e3-bad2-215fc89f4895-pod-info\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 
crc kubenswrapper[4680]: I0217 18:00:20.427789 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/81eeaf28-d5ac-44e3-bad2-215fc89f4895-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.428003 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/81eeaf28-d5ac-44e3-bad2-215fc89f4895-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.429551 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/81eeaf28-d5ac-44e3-bad2-215fc89f4895-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.441635 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjcvg\" (UniqueName: \"kubernetes.io/projected/81eeaf28-d5ac-44e3-bad2-215fc89f4895-kube-api-access-vjcvg\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.532992 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.544414 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4z7hf"] Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.674377 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.677726 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.680478 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.690626 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.696406 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.696685 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-7lfbg" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.696954 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.697182 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.697439 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.705111 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.819043 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.825378 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d9f1e89-7561-458e-9189-73106b757079-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.825470 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d9f1e89-7561-458e-9189-73106b757079-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.825538 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d9f1e89-7561-458e-9189-73106b757079-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.825606 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d9f1e89-7561-458e-9189-73106b757079-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.825656 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d9f1e89-7561-458e-9189-73106b757079-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"8d9f1e89-7561-458e-9189-73106b757079\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.825691 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.825784 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d9f1e89-7561-458e-9189-73106b757079-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.825836 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd6nq\" (UniqueName: \"kubernetes.io/projected/8d9f1e89-7561-458e-9189-73106b757079-kube-api-access-kd6nq\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.825875 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d9f1e89-7561-458e-9189-73106b757079-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.825940 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d9f1e89-7561-458e-9189-73106b757079-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.826001 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d9f1e89-7561-458e-9189-73106b757079-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.932050 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d9f1e89-7561-458e-9189-73106b757079-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.932149 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d9f1e89-7561-458e-9189-73106b757079-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.932201 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d9f1e89-7561-458e-9189-73106b757079-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.932229 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d9f1e89-7561-458e-9189-73106b757079-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.932253 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d9f1e89-7561-458e-9189-73106b757079-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.932290 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.932399 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d9f1e89-7561-458e-9189-73106b757079-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.932442 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd6nq\" (UniqueName: \"kubernetes.io/projected/8d9f1e89-7561-458e-9189-73106b757079-kube-api-access-kd6nq\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.932483 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d9f1e89-7561-458e-9189-73106b757079-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.932533 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d9f1e89-7561-458e-9189-73106b757079-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.932577 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d9f1e89-7561-458e-9189-73106b757079-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.933210 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.933895 4680 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d9f1e89-7561-458e-9189-73106b757079-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.934747 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d9f1e89-7561-458e-9189-73106b757079-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.941080 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d9f1e89-7561-458e-9189-73106b757079-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.942366 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d9f1e89-7561-458e-9189-73106b757079-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.943172 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d9f1e89-7561-458e-9189-73106b757079-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.946977 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d9f1e89-7561-458e-9189-73106b757079-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.949949 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d9f1e89-7561-458e-9189-73106b757079-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.950761 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d9f1e89-7561-458e-9189-73106b757079-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.950859 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d9f1e89-7561-458e-9189-73106b757079-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.959849 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd6nq\" (UniqueName: \"kubernetes.io/projected/8d9f1e89-7561-458e-9189-73106b757079-kube-api-access-kd6nq\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:20 crc kubenswrapper[4680]: I0217 18:00:20.975206 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.039108 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.341704 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.343293 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.347653 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.348077 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.348228 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-4k4zg" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.348368 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.359403 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.375203 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.430399 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-4z7hf" event={"ID":"a108e860-f849-47e3-b9d7-26fe735925db","Type":"ContainerStarted","Data":"a6b19cfb87566caceb8089ce6a4697f25c12eb396bb9e8c69b212dd712950bc9"} Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.441152 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/92bcbcf3-8caf-4493-99ac-478e3d55a1c1-kolla-config\") pod \"openstack-galera-0\" (UID: \"92bcbcf3-8caf-4493-99ac-478e3d55a1c1\") " pod="openstack/openstack-galera-0" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.441304 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bcbcf3-8caf-4493-99ac-478e3d55a1c1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"92bcbcf3-8caf-4493-99ac-478e3d55a1c1\") " pod="openstack/openstack-galera-0" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.441363 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92bcbcf3-8caf-4493-99ac-478e3d55a1c1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"92bcbcf3-8caf-4493-99ac-478e3d55a1c1\") " pod="openstack/openstack-galera-0" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.441390 4680 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/92bcbcf3-8caf-4493-99ac-478e3d55a1c1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"92bcbcf3-8caf-4493-99ac-478e3d55a1c1\") " pod="openstack/openstack-galera-0" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.441430 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/92bcbcf3-8caf-4493-99ac-478e3d55a1c1-config-data-default\") pod \"openstack-galera-0\" (UID: \"92bcbcf3-8caf-4493-99ac-478e3d55a1c1\") " pod="openstack/openstack-galera-0" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.441466 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/92bcbcf3-8caf-4493-99ac-478e3d55a1c1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"92bcbcf3-8caf-4493-99ac-478e3d55a1c1\") " pod="openstack/openstack-galera-0" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.441491 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"92bcbcf3-8caf-4493-99ac-478e3d55a1c1\") " pod="openstack/openstack-galera-0" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.441521 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55n6n\" (UniqueName: \"kubernetes.io/projected/92bcbcf3-8caf-4493-99ac-478e3d55a1c1-kube-api-access-55n6n\") pod \"openstack-galera-0\" (UID: \"92bcbcf3-8caf-4493-99ac-478e3d55a1c1\") " pod="openstack/openstack-galera-0" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.538894 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.544923 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/92bcbcf3-8caf-4493-99ac-478e3d55a1c1-kolla-config\") pod \"openstack-galera-0\" (UID: \"92bcbcf3-8caf-4493-99ac-478e3d55a1c1\") " pod="openstack/openstack-galera-0" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.545041 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bcbcf3-8caf-4493-99ac-478e3d55a1c1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"92bcbcf3-8caf-4493-99ac-478e3d55a1c1\") " pod="openstack/openstack-galera-0" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.545101 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92bcbcf3-8caf-4493-99ac-478e3d55a1c1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"92bcbcf3-8caf-4493-99ac-478e3d55a1c1\") " pod="openstack/openstack-galera-0" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.545153 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/92bcbcf3-8caf-4493-99ac-478e3d55a1c1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"92bcbcf3-8caf-4493-99ac-478e3d55a1c1\") " pod="openstack/openstack-galera-0" Feb 17 18:00:21 crc kubenswrapper[4680]: 
I0217 18:00:21.545380 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/92bcbcf3-8caf-4493-99ac-478e3d55a1c1-config-data-default\") pod \"openstack-galera-0\" (UID: \"92bcbcf3-8caf-4493-99ac-478e3d55a1c1\") " pod="openstack/openstack-galera-0" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.546987 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/92bcbcf3-8caf-4493-99ac-478e3d55a1c1-kolla-config\") pod \"openstack-galera-0\" (UID: \"92bcbcf3-8caf-4493-99ac-478e3d55a1c1\") " pod="openstack/openstack-galera-0" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.547003 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/92bcbcf3-8caf-4493-99ac-478e3d55a1c1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"92bcbcf3-8caf-4493-99ac-478e3d55a1c1\") " pod="openstack/openstack-galera-0" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.547046 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"92bcbcf3-8caf-4493-99ac-478e3d55a1c1\") " pod="openstack/openstack-galera-0" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.547091 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55n6n\" (UniqueName: \"kubernetes.io/projected/92bcbcf3-8caf-4493-99ac-478e3d55a1c1-kube-api-access-55n6n\") pod \"openstack-galera-0\" (UID: \"92bcbcf3-8caf-4493-99ac-478e3d55a1c1\") " pod="openstack/openstack-galera-0" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.549348 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/92bcbcf3-8caf-4493-99ac-478e3d55a1c1-config-data-default\") pod \"openstack-galera-0\" (UID: \"92bcbcf3-8caf-4493-99ac-478e3d55a1c1\") " pod="openstack/openstack-galera-0" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.549557 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"92bcbcf3-8caf-4493-99ac-478e3d55a1c1\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-galera-0" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.549998 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92bcbcf3-8caf-4493-99ac-478e3d55a1c1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"92bcbcf3-8caf-4493-99ac-478e3d55a1c1\") " pod="openstack/openstack-galera-0" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.550347 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/92bcbcf3-8caf-4493-99ac-478e3d55a1c1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"92bcbcf3-8caf-4493-99ac-478e3d55a1c1\") " pod="openstack/openstack-galera-0" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.555590 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/92bcbcf3-8caf-4493-99ac-478e3d55a1c1-galera-tls-certs\") pod 
\"openstack-galera-0\" (UID: \"92bcbcf3-8caf-4493-99ac-478e3d55a1c1\") " pod="openstack/openstack-galera-0" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.559390 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bcbcf3-8caf-4493-99ac-478e3d55a1c1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"92bcbcf3-8caf-4493-99ac-478e3d55a1c1\") " pod="openstack/openstack-galera-0" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.591022 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55n6n\" (UniqueName: \"kubernetes.io/projected/92bcbcf3-8caf-4493-99ac-478e3d55a1c1-kube-api-access-55n6n\") pod \"openstack-galera-0\" (UID: \"92bcbcf3-8caf-4493-99ac-478e3d55a1c1\") " pod="openstack/openstack-galera-0" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.627927 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"92bcbcf3-8caf-4493-99ac-478e3d55a1c1\") " pod="openstack/openstack-galera-0" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.681908 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 17 18:00:21 crc kubenswrapper[4680]: I0217 18:00:21.780724 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 18:00:21 crc kubenswrapper[4680]: W0217 18:00:21.845067 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d9f1e89_7561_458e_9189_73106b757079.slice/crio-5ed5723d3c36f4286b9ee118d73b7290f00c2fcef9da608837a6f6252c42625d WatchSource:0}: Error finding container 5ed5723d3c36f4286b9ee118d73b7290f00c2fcef9da608837a6f6252c42625d: Status 404 returned error can't find the container with id 5ed5723d3c36f4286b9ee118d73b7290f00c2fcef9da608837a6f6252c42625d Feb 17 18:00:22 crc kubenswrapper[4680]: I0217 18:00:22.449730 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"81eeaf28-d5ac-44e3-bad2-215fc89f4895","Type":"ContainerStarted","Data":"f0359a65c84df44330a51a7f1c8d1ac418f4b5e09a142d8a17e25a4856b26268"} Feb 17 18:00:22 crc kubenswrapper[4680]: I0217 18:00:22.461021 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 17 18:00:22 crc kubenswrapper[4680]: I0217 18:00:22.482929 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8d9f1e89-7561-458e-9189-73106b757079","Type":"ContainerStarted","Data":"5ed5723d3c36f4286b9ee118d73b7290f00c2fcef9da608837a6f6252c42625d"} Feb 17 18:00:22 crc kubenswrapper[4680]: I0217 18:00:22.773843 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 17 18:00:22 crc kubenswrapper[4680]: I0217 18:00:22.776982 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 17 18:00:22 crc kubenswrapper[4680]: I0217 18:00:22.834986 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 17 18:00:22 crc kubenswrapper[4680]: I0217 18:00:22.835807 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-5sbrh" Feb 17 18:00:22 crc kubenswrapper[4680]: I0217 18:00:22.859657 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 17 18:00:22 crc kubenswrapper[4680]: I0217 18:00:22.859919 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 17 18:00:22 crc kubenswrapper[4680]: I0217 18:00:22.916732 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 17 18:00:22 crc kubenswrapper[4680]: I0217 18:00:22.934782 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnn6l\" (UniqueName: \"kubernetes.io/projected/408a63fd-0033-4bf6-925a-cc82a63b2a60-kube-api-access-wnn6l\") pod \"openstack-cell1-galera-0\" (UID: \"408a63fd-0033-4bf6-925a-cc82a63b2a60\") " pod="openstack/openstack-cell1-galera-0" Feb 17 18:00:22 crc kubenswrapper[4680]: I0217 18:00:22.935038 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/408a63fd-0033-4bf6-925a-cc82a63b2a60-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"408a63fd-0033-4bf6-925a-cc82a63b2a60\") " pod="openstack/openstack-cell1-galera-0" Feb 17 18:00:22 crc kubenswrapper[4680]: I0217 18:00:22.935128 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/408a63fd-0033-4bf6-925a-cc82a63b2a60-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"408a63fd-0033-4bf6-925a-cc82a63b2a60\") " pod="openstack/openstack-cell1-galera-0" Feb 17 18:00:22 crc kubenswrapper[4680]: I0217 18:00:22.935270 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/408a63fd-0033-4bf6-925a-cc82a63b2a60-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"408a63fd-0033-4bf6-925a-cc82a63b2a60\") " pod="openstack/openstack-cell1-galera-0" Feb 17 18:00:22 crc kubenswrapper[4680]: I0217 18:00:22.935419 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/408a63fd-0033-4bf6-925a-cc82a63b2a60-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"408a63fd-0033-4bf6-925a-cc82a63b2a60\") " pod="openstack/openstack-cell1-galera-0" Feb 17 18:00:22 crc kubenswrapper[4680]: I0217 18:00:22.942445 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/408a63fd-0033-4bf6-925a-cc82a63b2a60-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"408a63fd-0033-4bf6-925a-cc82a63b2a60\") " pod="openstack/openstack-cell1-galera-0" Feb 17 18:00:22 crc kubenswrapper[4680]: I0217 18:00:22.942686 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"408a63fd-0033-4bf6-925a-cc82a63b2a60\") " pod="openstack/openstack-cell1-galera-0" Feb 17 18:00:22 crc kubenswrapper[4680]: I0217 18:00:22.942821 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/408a63fd-0033-4bf6-925a-cc82a63b2a60-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"408a63fd-0033-4bf6-925a-cc82a63b2a60\") " pod="openstack/openstack-cell1-galera-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.045131 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnn6l\" (UniqueName: \"kubernetes.io/projected/408a63fd-0033-4bf6-925a-cc82a63b2a60-kube-api-access-wnn6l\") pod \"openstack-cell1-galera-0\" (UID: \"408a63fd-0033-4bf6-925a-cc82a63b2a60\") " pod="openstack/openstack-cell1-galera-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.045182 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/408a63fd-0033-4bf6-925a-cc82a63b2a60-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"408a63fd-0033-4bf6-925a-cc82a63b2a60\") " pod="openstack/openstack-cell1-galera-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.045205 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/408a63fd-0033-4bf6-925a-cc82a63b2a60-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"408a63fd-0033-4bf6-925a-cc82a63b2a60\") " pod="openstack/openstack-cell1-galera-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.045246 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/408a63fd-0033-4bf6-925a-cc82a63b2a60-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"408a63fd-0033-4bf6-925a-cc82a63b2a60\") " pod="openstack/openstack-cell1-galera-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.045274 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/408a63fd-0033-4bf6-925a-cc82a63b2a60-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"408a63fd-0033-4bf6-925a-cc82a63b2a60\") " pod="openstack/openstack-cell1-galera-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.045298 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/408a63fd-0033-4bf6-925a-cc82a63b2a60-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"408a63fd-0033-4bf6-925a-cc82a63b2a60\") " pod="openstack/openstack-cell1-galera-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.045334 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"408a63fd-0033-4bf6-925a-cc82a63b2a60\") " pod="openstack/openstack-cell1-galera-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.045360 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/408a63fd-0033-4bf6-925a-cc82a63b2a60-config-data-default\") pod 
\"openstack-cell1-galera-0\" (UID: \"408a63fd-0033-4bf6-925a-cc82a63b2a60\") " pod="openstack/openstack-cell1-galera-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.046538 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/408a63fd-0033-4bf6-925a-cc82a63b2a60-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"408a63fd-0033-4bf6-925a-cc82a63b2a60\") " pod="openstack/openstack-cell1-galera-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.047211 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/408a63fd-0033-4bf6-925a-cc82a63b2a60-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"408a63fd-0033-4bf6-925a-cc82a63b2a60\") " pod="openstack/openstack-cell1-galera-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.047436 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/408a63fd-0033-4bf6-925a-cc82a63b2a60-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"408a63fd-0033-4bf6-925a-cc82a63b2a60\") " pod="openstack/openstack-cell1-galera-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.055994 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/408a63fd-0033-4bf6-925a-cc82a63b2a60-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"408a63fd-0033-4bf6-925a-cc82a63b2a60\") " pod="openstack/openstack-cell1-galera-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.056277 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"408a63fd-0033-4bf6-925a-cc82a63b2a60\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-cell1-galera-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.058496 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/408a63fd-0033-4bf6-925a-cc82a63b2a60-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"408a63fd-0033-4bf6-925a-cc82a63b2a60\") " pod="openstack/openstack-cell1-galera-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.067469 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/408a63fd-0033-4bf6-925a-cc82a63b2a60-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"408a63fd-0033-4bf6-925a-cc82a63b2a60\") " pod="openstack/openstack-cell1-galera-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.103191 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"408a63fd-0033-4bf6-925a-cc82a63b2a60\") " pod="openstack/openstack-cell1-galera-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.171594 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnn6l\" (UniqueName: \"kubernetes.io/projected/408a63fd-0033-4bf6-925a-cc82a63b2a60-kube-api-access-wnn6l\") pod \"openstack-cell1-galera-0\" (UID: \"408a63fd-0033-4bf6-925a-cc82a63b2a60\") " pod="openstack/openstack-cell1-galera-0" Feb 17 18:00:23 crc 
kubenswrapper[4680]: I0217 18:00:23.193913 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.295825 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.297037 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.315141 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.315711 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-lvdfm" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.315840 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.335556 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.386466 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxgh7\" (UniqueName: \"kubernetes.io/projected/391eec5a-7bea-47ed-963a-8152e1db2357-kube-api-access-nxgh7\") pod \"memcached-0\" (UID: \"391eec5a-7bea-47ed-963a-8152e1db2357\") " pod="openstack/memcached-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.386536 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/391eec5a-7bea-47ed-963a-8152e1db2357-combined-ca-bundle\") pod \"memcached-0\" (UID: \"391eec5a-7bea-47ed-963a-8152e1db2357\") " pod="openstack/memcached-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.386583 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/391eec5a-7bea-47ed-963a-8152e1db2357-memcached-tls-certs\") pod \"memcached-0\" (UID: \"391eec5a-7bea-47ed-963a-8152e1db2357\") " pod="openstack/memcached-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.386628 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/391eec5a-7bea-47ed-963a-8152e1db2357-config-data\") pod \"memcached-0\" (UID: \"391eec5a-7bea-47ed-963a-8152e1db2357\") " pod="openstack/memcached-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.386715 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/391eec5a-7bea-47ed-963a-8152e1db2357-kolla-config\") pod \"memcached-0\" (UID: \"391eec5a-7bea-47ed-963a-8152e1db2357\") " pod="openstack/memcached-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.489159 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/391eec5a-7bea-47ed-963a-8152e1db2357-config-data\") pod \"memcached-0\" (UID: \"391eec5a-7bea-47ed-963a-8152e1db2357\") " pod="openstack/memcached-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.489283 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/391eec5a-7bea-47ed-963a-8152e1db2357-kolla-config\") pod \"memcached-0\" (UID: \"391eec5a-7bea-47ed-963a-8152e1db2357\") " pod="openstack/memcached-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.489325 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxgh7\" (UniqueName: \"kubernetes.io/projected/391eec5a-7bea-47ed-963a-8152e1db2357-kube-api-access-nxgh7\") pod \"memcached-0\" (UID: \"391eec5a-7bea-47ed-963a-8152e1db2357\") " pod="openstack/memcached-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.489361 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/391eec5a-7bea-47ed-963a-8152e1db2357-combined-ca-bundle\") pod \"memcached-0\" (UID: \"391eec5a-7bea-47ed-963a-8152e1db2357\") " pod="openstack/memcached-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.489405 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/391eec5a-7bea-47ed-963a-8152e1db2357-memcached-tls-certs\") pod \"memcached-0\" (UID: \"391eec5a-7bea-47ed-963a-8152e1db2357\") " pod="openstack/memcached-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.490295 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/391eec5a-7bea-47ed-963a-8152e1db2357-config-data\") pod \"memcached-0\" (UID: \"391eec5a-7bea-47ed-963a-8152e1db2357\") " pod="openstack/memcached-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.490425 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/391eec5a-7bea-47ed-963a-8152e1db2357-kolla-config\") pod \"memcached-0\" (UID: \"391eec5a-7bea-47ed-963a-8152e1db2357\") " pod="openstack/memcached-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.523035 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxgh7\" (UniqueName: \"kubernetes.io/projected/391eec5a-7bea-47ed-963a-8152e1db2357-kube-api-access-nxgh7\") pod \"memcached-0\" (UID: \"391eec5a-7bea-47ed-963a-8152e1db2357\") " pod="openstack/memcached-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.524127 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/391eec5a-7bea-47ed-963a-8152e1db2357-combined-ca-bundle\") pod \"memcached-0\" (UID: \"391eec5a-7bea-47ed-963a-8152e1db2357\") " pod="openstack/memcached-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.532396 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/391eec5a-7bea-47ed-963a-8152e1db2357-memcached-tls-certs\") pod \"memcached-0\" (UID: \"391eec5a-7bea-47ed-963a-8152e1db2357\") " pod="openstack/memcached-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.577182 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"92bcbcf3-8caf-4493-99ac-478e3d55a1c1","Type":"ContainerStarted","Data":"0b23b8be94007162a0c86ffa060855dd51561500d41b32b591b2d893d48e630e"} Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.644515 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 17 18:00:23 crc kubenswrapper[4680]: I0217 18:00:23.835535 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 17 18:00:24 crc kubenswrapper[4680]: W0217 18:00:24.028976 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod408a63fd_0033_4bf6_925a_cc82a63b2a60.slice/crio-848b1efd54a7e8d5a66b7ec281e22df84c30e67a29584b5d014b3f34c704b1b0 WatchSource:0}: Error finding container 848b1efd54a7e8d5a66b7ec281e22df84c30e67a29584b5d014b3f34c704b1b0: Status 404 returned error can't find the container with id 848b1efd54a7e8d5a66b7ec281e22df84c30e67a29584b5d014b3f34c704b1b0 Feb 17 18:00:24 crc kubenswrapper[4680]: I0217 18:00:24.634432 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"408a63fd-0033-4bf6-925a-cc82a63b2a60","Type":"ContainerStarted","Data":"848b1efd54a7e8d5a66b7ec281e22df84c30e67a29584b5d014b3f34c704b1b0"} Feb 17 18:00:24 crc kubenswrapper[4680]: I0217 18:00:24.684715 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 17 18:00:25 crc kubenswrapper[4680]: I0217 18:00:25.248784 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 18:00:25 crc kubenswrapper[4680]: I0217 18:00:25.249659 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 18:00:25 crc kubenswrapper[4680]: I0217 18:00:25.259255 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-bwh7m" Feb 17 18:00:25 crc kubenswrapper[4680]: I0217 18:00:25.277520 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 18:00:25 crc kubenswrapper[4680]: I0217 18:00:25.344256 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grhpz\" (UniqueName: \"kubernetes.io/projected/6b425c86-a83e-4258-8432-83308132d038-kube-api-access-grhpz\") pod \"kube-state-metrics-0\" (UID: \"6b425c86-a83e-4258-8432-83308132d038\") " pod="openstack/kube-state-metrics-0" Feb 17 18:00:25 crc kubenswrapper[4680]: I0217 18:00:25.448481 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grhpz\" (UniqueName: \"kubernetes.io/projected/6b425c86-a83e-4258-8432-83308132d038-kube-api-access-grhpz\") pod \"kube-state-metrics-0\" (UID: \"6b425c86-a83e-4258-8432-83308132d038\") " pod="openstack/kube-state-metrics-0" Feb 17 18:00:25 crc kubenswrapper[4680]: I0217 18:00:25.508627 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grhpz\" (UniqueName: \"kubernetes.io/projected/6b425c86-a83e-4258-8432-83308132d038-kube-api-access-grhpz\") pod \"kube-state-metrics-0\" (UID: \"6b425c86-a83e-4258-8432-83308132d038\") " pod="openstack/kube-state-metrics-0" Feb 17 18:00:25 crc kubenswrapper[4680]: I0217 18:00:25.587713 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 18:00:25 crc kubenswrapper[4680]: I0217 18:00:25.660397 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"391eec5a-7bea-47ed-963a-8152e1db2357","Type":"ContainerStarted","Data":"0fc650643689627273bda6592f68404263a124cb57d7f855a4f7fef3b9185518"} Feb 17 18:00:26 crc kubenswrapper[4680]: I0217 18:00:26.997263 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 18:00:27 crc kubenswrapper[4680]: W0217 18:00:27.013225 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b425c86_a83e_4258_8432_83308132d038.slice/crio-529bcfeee2a08fcbc70f19c1b54859239e92ec6e7312ab09cbf3fc893dae4762 WatchSource:0}: Error finding container 529bcfeee2a08fcbc70f19c1b54859239e92ec6e7312ab09cbf3fc893dae4762: Status 404 returned error can't find the container with id 529bcfeee2a08fcbc70f19c1b54859239e92ec6e7312ab09cbf3fc893dae4762 Feb 17 18:00:27 crc kubenswrapper[4680]: I0217 18:00:27.695625 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6b425c86-a83e-4258-8432-83308132d038","Type":"ContainerStarted","Data":"529bcfeee2a08fcbc70f19c1b54859239e92ec6e7312ab09cbf3fc893dae4762"} Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.083815 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-7dz9r"] Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.084812 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7dz9r" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.117011 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-nxvzp" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.132224 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.132814 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.163277 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-grk2z"] Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.164754 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rv6q\" (UniqueName: \"kubernetes.io/projected/2ce593ef-6a97-4361-af72-7637711c07b5-kube-api-access-8rv6q\") pod \"ovn-controller-7dz9r\" (UID: \"2ce593ef-6a97-4361-af72-7637711c07b5\") " pod="openstack/ovn-controller-7dz9r" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.165127 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-grk2z" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.165455 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ce593ef-6a97-4361-af72-7637711c07b5-scripts\") pod \"ovn-controller-7dz9r\" (UID: \"2ce593ef-6a97-4361-af72-7637711c07b5\") " pod="openstack/ovn-controller-7dz9r" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.165662 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce593ef-6a97-4361-af72-7637711c07b5-combined-ca-bundle\") pod \"ovn-controller-7dz9r\" (UID: \"2ce593ef-6a97-4361-af72-7637711c07b5\") " pod="openstack/ovn-controller-7dz9r" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.165855 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ce593ef-6a97-4361-af72-7637711c07b5-ovn-controller-tls-certs\") pod \"ovn-controller-7dz9r\" (UID: \"2ce593ef-6a97-4361-af72-7637711c07b5\") " pod="openstack/ovn-controller-7dz9r" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.167243 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7dz9r"] Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.167243 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2ce593ef-6a97-4361-af72-7637711c07b5-var-run\") pod \"ovn-controller-7dz9r\" (UID: \"2ce593ef-6a97-4361-af72-7637711c07b5\") " pod="openstack/ovn-controller-7dz9r" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.167639 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2ce593ef-6a97-4361-af72-7637711c07b5-var-run-ovn\") pod \"ovn-controller-7dz9r\" (UID: \"2ce593ef-6a97-4361-af72-7637711c07b5\") " pod="openstack/ovn-controller-7dz9r" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.167873 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2ce593ef-6a97-4361-af72-7637711c07b5-var-log-ovn\") pod \"ovn-controller-7dz9r\" (UID: \"2ce593ef-6a97-4361-af72-7637711c07b5\") " pod="openstack/ovn-controller-7dz9r" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.208068 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-grk2z"] Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.273051 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8f069d89-4464-4526-ba2c-5d4a9de13e02-etc-ovs\") pod \"ovn-controller-ovs-grk2z\" (UID: \"8f069d89-4464-4526-ba2c-5d4a9de13e02\") " pod="openstack/ovn-controller-ovs-grk2z" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.273382 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2ce593ef-6a97-4361-af72-7637711c07b5-var-run\") pod \"ovn-controller-7dz9r\" (UID: \"2ce593ef-6a97-4361-af72-7637711c07b5\") " pod="openstack/ovn-controller-7dz9r" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.273421 4680 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2ce593ef-6a97-4361-af72-7637711c07b5-var-run-ovn\") pod \"ovn-controller-7dz9r\" (UID: \"2ce593ef-6a97-4361-af72-7637711c07b5\") " pod="openstack/ovn-controller-7dz9r" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.273451 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8f069d89-4464-4526-ba2c-5d4a9de13e02-var-run\") pod \"ovn-controller-ovs-grk2z\" (UID: \"8f069d89-4464-4526-ba2c-5d4a9de13e02\") " pod="openstack/ovn-controller-ovs-grk2z" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.273480 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2ce593ef-6a97-4361-af72-7637711c07b5-var-log-ovn\") pod \"ovn-controller-7dz9r\" (UID: \"2ce593ef-6a97-4361-af72-7637711c07b5\") " pod="openstack/ovn-controller-7dz9r" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.273537 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rv6q\" (UniqueName: \"kubernetes.io/projected/2ce593ef-6a97-4361-af72-7637711c07b5-kube-api-access-8rv6q\") pod \"ovn-controller-7dz9r\" (UID: \"2ce593ef-6a97-4361-af72-7637711c07b5\") " pod="openstack/ovn-controller-7dz9r" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.273563 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8f069d89-4464-4526-ba2c-5d4a9de13e02-var-log\") pod \"ovn-controller-ovs-grk2z\" (UID: \"8f069d89-4464-4526-ba2c-5d4a9de13e02\") " pod="openstack/ovn-controller-ovs-grk2z" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.273608 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ce593ef-6a97-4361-af72-7637711c07b5-scripts\") pod \"ovn-controller-7dz9r\" (UID: \"2ce593ef-6a97-4361-af72-7637711c07b5\") " pod="openstack/ovn-controller-7dz9r" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.273634 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce593ef-6a97-4361-af72-7637711c07b5-combined-ca-bundle\") pod \"ovn-controller-7dz9r\" (UID: \"2ce593ef-6a97-4361-af72-7637711c07b5\") " pod="openstack/ovn-controller-7dz9r" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.273656 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8f069d89-4464-4526-ba2c-5d4a9de13e02-var-lib\") pod \"ovn-controller-ovs-grk2z\" (UID: \"8f069d89-4464-4526-ba2c-5d4a9de13e02\") " pod="openstack/ovn-controller-ovs-grk2z" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.273705 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ce593ef-6a97-4361-af72-7637711c07b5-ovn-controller-tls-certs\") pod \"ovn-controller-7dz9r\" (UID: \"2ce593ef-6a97-4361-af72-7637711c07b5\") " pod="openstack/ovn-controller-7dz9r" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.273720 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq568\" (UniqueName: 
\"kubernetes.io/projected/8f069d89-4464-4526-ba2c-5d4a9de13e02-kube-api-access-vq568\") pod \"ovn-controller-ovs-grk2z\" (UID: \"8f069d89-4464-4526-ba2c-5d4a9de13e02\") " pod="openstack/ovn-controller-ovs-grk2z" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.273745 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f069d89-4464-4526-ba2c-5d4a9de13e02-scripts\") pod \"ovn-controller-ovs-grk2z\" (UID: \"8f069d89-4464-4526-ba2c-5d4a9de13e02\") " pod="openstack/ovn-controller-ovs-grk2z" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.275349 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2ce593ef-6a97-4361-af72-7637711c07b5-var-run\") pod \"ovn-controller-7dz9r\" (UID: \"2ce593ef-6a97-4361-af72-7637711c07b5\") " pod="openstack/ovn-controller-7dz9r" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.276250 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2ce593ef-6a97-4361-af72-7637711c07b5-var-run-ovn\") pod \"ovn-controller-7dz9r\" (UID: \"2ce593ef-6a97-4361-af72-7637711c07b5\") " pod="openstack/ovn-controller-7dz9r" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.276942 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2ce593ef-6a97-4361-af72-7637711c07b5-var-log-ovn\") pod \"ovn-controller-7dz9r\" (UID: \"2ce593ef-6a97-4361-af72-7637711c07b5\") " pod="openstack/ovn-controller-7dz9r" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.277023 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ce593ef-6a97-4361-af72-7637711c07b5-scripts\") pod \"ovn-controller-7dz9r\" (UID: \"2ce593ef-6a97-4361-af72-7637711c07b5\") " pod="openstack/ovn-controller-7dz9r" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.299720 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ce593ef-6a97-4361-af72-7637711c07b5-ovn-controller-tls-certs\") pod \"ovn-controller-7dz9r\" (UID: \"2ce593ef-6a97-4361-af72-7637711c07b5\") " pod="openstack/ovn-controller-7dz9r" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.299954 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce593ef-6a97-4361-af72-7637711c07b5-combined-ca-bundle\") pod \"ovn-controller-7dz9r\" (UID: \"2ce593ef-6a97-4361-af72-7637711c07b5\") " pod="openstack/ovn-controller-7dz9r" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.309499 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rv6q\" (UniqueName: \"kubernetes.io/projected/2ce593ef-6a97-4361-af72-7637711c07b5-kube-api-access-8rv6q\") pod \"ovn-controller-7dz9r\" (UID: \"2ce593ef-6a97-4361-af72-7637711c07b5\") " pod="openstack/ovn-controller-7dz9r" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.375245 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8f069d89-4464-4526-ba2c-5d4a9de13e02-var-log\") pod \"ovn-controller-ovs-grk2z\" (UID: \"8f069d89-4464-4526-ba2c-5d4a9de13e02\") " pod="openstack/ovn-controller-ovs-grk2z" Feb 17 18:00:29 crc 
kubenswrapper[4680]: I0217 18:00:29.375674 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8f069d89-4464-4526-ba2c-5d4a9de13e02-var-log\") pod \"ovn-controller-ovs-grk2z\" (UID: \"8f069d89-4464-4526-ba2c-5d4a9de13e02\") " pod="openstack/ovn-controller-ovs-grk2z" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.375980 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8f069d89-4464-4526-ba2c-5d4a9de13e02-var-lib\") pod \"ovn-controller-ovs-grk2z\" (UID: \"8f069d89-4464-4526-ba2c-5d4a9de13e02\") " pod="openstack/ovn-controller-ovs-grk2z" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.376520 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8f069d89-4464-4526-ba2c-5d4a9de13e02-var-lib\") pod \"ovn-controller-ovs-grk2z\" (UID: \"8f069d89-4464-4526-ba2c-5d4a9de13e02\") " pod="openstack/ovn-controller-ovs-grk2z" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.376576 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq568\" (UniqueName: \"kubernetes.io/projected/8f069d89-4464-4526-ba2c-5d4a9de13e02-kube-api-access-vq568\") pod \"ovn-controller-ovs-grk2z\" (UID: \"8f069d89-4464-4526-ba2c-5d4a9de13e02\") " pod="openstack/ovn-controller-ovs-grk2z" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.376598 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f069d89-4464-4526-ba2c-5d4a9de13e02-scripts\") pod \"ovn-controller-ovs-grk2z\" (UID: \"8f069d89-4464-4526-ba2c-5d4a9de13e02\") " pod="openstack/ovn-controller-ovs-grk2z" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.376655 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8f069d89-4464-4526-ba2c-5d4a9de13e02-etc-ovs\") pod \"ovn-controller-ovs-grk2z\" (UID: \"8f069d89-4464-4526-ba2c-5d4a9de13e02\") " pod="openstack/ovn-controller-ovs-grk2z" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.376717 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8f069d89-4464-4526-ba2c-5d4a9de13e02-var-run\") pod \"ovn-controller-ovs-grk2z\" (UID: \"8f069d89-4464-4526-ba2c-5d4a9de13e02\") " pod="openstack/ovn-controller-ovs-grk2z" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.376812 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8f069d89-4464-4526-ba2c-5d4a9de13e02-var-run\") pod \"ovn-controller-ovs-grk2z\" (UID: \"8f069d89-4464-4526-ba2c-5d4a9de13e02\") " pod="openstack/ovn-controller-ovs-grk2z" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.382593 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8f069d89-4464-4526-ba2c-5d4a9de13e02-etc-ovs\") pod \"ovn-controller-ovs-grk2z\" (UID: \"8f069d89-4464-4526-ba2c-5d4a9de13e02\") " pod="openstack/ovn-controller-ovs-grk2z" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.387140 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f069d89-4464-4526-ba2c-5d4a9de13e02-scripts\") pod \"ovn-controller-ovs-grk2z\" (UID: 
\"8f069d89-4464-4526-ba2c-5d4a9de13e02\") " pod="openstack/ovn-controller-ovs-grk2z" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.398561 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq568\" (UniqueName: \"kubernetes.io/projected/8f069d89-4464-4526-ba2c-5d4a9de13e02-kube-api-access-vq568\") pod \"ovn-controller-ovs-grk2z\" (UID: \"8f069d89-4464-4526-ba2c-5d4a9de13e02\") " pod="openstack/ovn-controller-ovs-grk2z" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.455131 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7dz9r" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.501066 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-grk2z" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.986677 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.988925 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.994684 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.994812 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.994865 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.995574 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 17 18:00:29 crc kubenswrapper[4680]: I0217 18:00:29.995755 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-js2tk" Feb 17 18:00:30 crc kubenswrapper[4680]: I0217 18:00:30.005709 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 17 18:00:30 crc kubenswrapper[4680]: I0217 18:00:30.097310 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6ba0d348-8721-4d4f-9904-994b1ae56423-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6ba0d348-8721-4d4f-9904-994b1ae56423\") " pod="openstack/ovsdbserver-nb-0" Feb 17 18:00:30 crc kubenswrapper[4680]: I0217 18:00:30.097500 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba0d348-8721-4d4f-9904-994b1ae56423-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6ba0d348-8721-4d4f-9904-994b1ae56423\") " pod="openstack/ovsdbserver-nb-0" Feb 17 18:00:30 crc kubenswrapper[4680]: I0217 18:00:30.099003 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba0d348-8721-4d4f-9904-994b1ae56423-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6ba0d348-8721-4d4f-9904-994b1ae56423\") " pod="openstack/ovsdbserver-nb-0" Feb 17 18:00:30 crc kubenswrapper[4680]: I0217 18:00:30.099138 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6ba0d348-8721-4d4f-9904-994b1ae56423\") " pod="openstack/ovsdbserver-nb-0" Feb 17 18:00:30 crc kubenswrapper[4680]: I0217 18:00:30.099222 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba0d348-8721-4d4f-9904-994b1ae56423-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6ba0d348-8721-4d4f-9904-994b1ae56423\") " pod="openstack/ovsdbserver-nb-0" Feb 17 18:00:30 crc kubenswrapper[4680]: I0217 18:00:30.099283 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ba0d348-8721-4d4f-9904-994b1ae56423-config\") pod \"ovsdbserver-nb-0\" (UID: \"6ba0d348-8721-4d4f-9904-994b1ae56423\") " pod="openstack/ovsdbserver-nb-0" Feb 17 18:00:30 crc kubenswrapper[4680]: I0217 18:00:30.099807 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ba0d348-8721-4d4f-9904-994b1ae56423-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6ba0d348-8721-4d4f-9904-994b1ae56423\") " pod="openstack/ovsdbserver-nb-0" Feb 17 18:00:30 crc kubenswrapper[4680]: I0217 18:00:30.099854 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srhkg\" (UniqueName: \"kubernetes.io/projected/6ba0d348-8721-4d4f-9904-994b1ae56423-kube-api-access-srhkg\") pod \"ovsdbserver-nb-0\" (UID: \"6ba0d348-8721-4d4f-9904-994b1ae56423\") " pod="openstack/ovsdbserver-nb-0" Feb 17 18:00:30 crc kubenswrapper[4680]: I0217 18:00:30.201300 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6ba0d348-8721-4d4f-9904-994b1ae56423\") " pod="openstack/ovsdbserver-nb-0" Feb 17 18:00:30 crc kubenswrapper[4680]: I0217 18:00:30.201367 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba0d348-8721-4d4f-9904-994b1ae56423-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6ba0d348-8721-4d4f-9904-994b1ae56423\") " pod="openstack/ovsdbserver-nb-0" Feb 17 18:00:30 crc kubenswrapper[4680]: I0217 18:00:30.201423 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ba0d348-8721-4d4f-9904-994b1ae56423-config\") pod \"ovsdbserver-nb-0\" (UID: \"6ba0d348-8721-4d4f-9904-994b1ae56423\") " pod="openstack/ovsdbserver-nb-0" Feb 17 18:00:30 crc kubenswrapper[4680]: I0217 18:00:30.201464 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ba0d348-8721-4d4f-9904-994b1ae56423-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6ba0d348-8721-4d4f-9904-994b1ae56423\") " pod="openstack/ovsdbserver-nb-0" Feb 17 18:00:30 crc kubenswrapper[4680]: I0217 18:00:30.201508 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srhkg\" (UniqueName: \"kubernetes.io/projected/6ba0d348-8721-4d4f-9904-994b1ae56423-kube-api-access-srhkg\") pod \"ovsdbserver-nb-0\" (UID: \"6ba0d348-8721-4d4f-9904-994b1ae56423\") " pod="openstack/ovsdbserver-nb-0" Feb 17 18:00:30 crc kubenswrapper[4680]: I0217 18:00:30.201722 4680 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6ba0d348-8721-4d4f-9904-994b1ae56423-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6ba0d348-8721-4d4f-9904-994b1ae56423\") " pod="openstack/ovsdbserver-nb-0" Feb 17 18:00:30 crc kubenswrapper[4680]: I0217 18:00:30.201762 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba0d348-8721-4d4f-9904-994b1ae56423-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6ba0d348-8721-4d4f-9904-994b1ae56423\") " pod="openstack/ovsdbserver-nb-0" Feb 17 18:00:30 crc kubenswrapper[4680]: I0217 18:00:30.201783 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba0d348-8721-4d4f-9904-994b1ae56423-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6ba0d348-8721-4d4f-9904-994b1ae56423\") " pod="openstack/ovsdbserver-nb-0" Feb 17 18:00:30 crc kubenswrapper[4680]: I0217 18:00:30.203542 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6ba0d348-8721-4d4f-9904-994b1ae56423\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-nb-0" Feb 17 18:00:30 crc kubenswrapper[4680]: I0217 18:00:30.204068 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6ba0d348-8721-4d4f-9904-994b1ae56423-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6ba0d348-8721-4d4f-9904-994b1ae56423\") " pod="openstack/ovsdbserver-nb-0" Feb 17 18:00:30 crc kubenswrapper[4680]: I0217 18:00:30.205443 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ba0d348-8721-4d4f-9904-994b1ae56423-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6ba0d348-8721-4d4f-9904-994b1ae56423\") " pod="openstack/ovsdbserver-nb-0" Feb 17 18:00:30 crc kubenswrapper[4680]: I0217 18:00:30.207600 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ba0d348-8721-4d4f-9904-994b1ae56423-config\") pod \"ovsdbserver-nb-0\" (UID: \"6ba0d348-8721-4d4f-9904-994b1ae56423\") " pod="openstack/ovsdbserver-nb-0" Feb 17 18:00:30 crc kubenswrapper[4680]: I0217 18:00:30.221476 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba0d348-8721-4d4f-9904-994b1ae56423-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6ba0d348-8721-4d4f-9904-994b1ae56423\") " pod="openstack/ovsdbserver-nb-0" Feb 17 18:00:30 crc kubenswrapper[4680]: I0217 18:00:30.223155 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba0d348-8721-4d4f-9904-994b1ae56423-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6ba0d348-8721-4d4f-9904-994b1ae56423\") " pod="openstack/ovsdbserver-nb-0" Feb 17 18:00:30 crc kubenswrapper[4680]: I0217 18:00:30.225115 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srhkg\" (UniqueName: \"kubernetes.io/projected/6ba0d348-8721-4d4f-9904-994b1ae56423-kube-api-access-srhkg\") pod \"ovsdbserver-nb-0\" (UID: \"6ba0d348-8721-4d4f-9904-994b1ae56423\") 
" pod="openstack/ovsdbserver-nb-0" Feb 17 18:00:30 crc kubenswrapper[4680]: I0217 18:00:30.225826 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba0d348-8721-4d4f-9904-994b1ae56423-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6ba0d348-8721-4d4f-9904-994b1ae56423\") " pod="openstack/ovsdbserver-nb-0" Feb 17 18:00:30 crc kubenswrapper[4680]: I0217 18:00:30.245006 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6ba0d348-8721-4d4f-9904-994b1ae56423\") " pod="openstack/ovsdbserver-nb-0" Feb 17 18:00:30 crc kubenswrapper[4680]: I0217 18:00:30.312457 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 17 18:00:30 crc kubenswrapper[4680]: I0217 18:00:30.388093 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7dz9r"] Feb 17 18:00:30 crc kubenswrapper[4680]: W0217 18:00:30.413781 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ce593ef_6a97_4361_af72_7637711c07b5.slice/crio-25bff4835d2c266878013bb944435cc6e815ae0a38d70fcd1214337f4aacfaf1 WatchSource:0}: Error finding container 25bff4835d2c266878013bb944435cc6e815ae0a38d70fcd1214337f4aacfaf1: Status 404 returned error can't find the container with id 25bff4835d2c266878013bb944435cc6e815ae0a38d70fcd1214337f4aacfaf1 Feb 17 18:00:30 crc kubenswrapper[4680]: I0217 18:00:30.779387 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7dz9r" event={"ID":"2ce593ef-6a97-4361-af72-7637711c07b5","Type":"ContainerStarted","Data":"25bff4835d2c266878013bb944435cc6e815ae0a38d70fcd1214337f4aacfaf1"} Feb 17 18:00:32 crc kubenswrapper[4680]: I0217 18:00:32.832734 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 17 18:00:32 crc kubenswrapper[4680]: I0217 18:00:32.834913 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 17 18:00:32 crc kubenswrapper[4680]: I0217 18:00:32.838167 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 17 18:00:32 crc kubenswrapper[4680]: I0217 18:00:32.838462 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-5v4vl" Feb 17 18:00:32 crc kubenswrapper[4680]: I0217 18:00:32.838606 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 17 18:00:32 crc kubenswrapper[4680]: I0217 18:00:32.838753 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 17 18:00:32 crc kubenswrapper[4680]: I0217 18:00:32.851809 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 17 18:00:32 crc kubenswrapper[4680]: I0217 18:00:32.974431 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e083ffca-d900-4b1a-bc3a-eacae2a81902\") " pod="openstack/ovsdbserver-sb-0" Feb 17 18:00:32 crc kubenswrapper[4680]: I0217 18:00:32.974573 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpr2k\" (UniqueName: \"kubernetes.io/projected/e083ffca-d900-4b1a-bc3a-eacae2a81902-kube-api-access-cpr2k\") pod \"ovsdbserver-sb-0\" (UID: \"e083ffca-d900-4b1a-bc3a-eacae2a81902\") " pod="openstack/ovsdbserver-sb-0" Feb 17 18:00:32 crc kubenswrapper[4680]: I0217 18:00:32.974601 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e083ffca-d900-4b1a-bc3a-eacae2a81902-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e083ffca-d900-4b1a-bc3a-eacae2a81902\") " pod="openstack/ovsdbserver-sb-0" Feb 17 18:00:32 crc kubenswrapper[4680]: I0217 18:00:32.974629 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e083ffca-d900-4b1a-bc3a-eacae2a81902-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e083ffca-d900-4b1a-bc3a-eacae2a81902\") " pod="openstack/ovsdbserver-sb-0" Feb 17 18:00:32 crc kubenswrapper[4680]: I0217 18:00:32.974677 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e083ffca-d900-4b1a-bc3a-eacae2a81902-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e083ffca-d900-4b1a-bc3a-eacae2a81902\") " pod="openstack/ovsdbserver-sb-0" Feb 17 18:00:32 crc kubenswrapper[4680]: I0217 18:00:32.974715 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e083ffca-d900-4b1a-bc3a-eacae2a81902-config\") pod \"ovsdbserver-sb-0\" (UID: \"e083ffca-d900-4b1a-bc3a-eacae2a81902\") " pod="openstack/ovsdbserver-sb-0" Feb 17 18:00:32 crc kubenswrapper[4680]: I0217 18:00:32.974741 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e083ffca-d900-4b1a-bc3a-eacae2a81902-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e083ffca-d900-4b1a-bc3a-eacae2a81902\") " 
pod="openstack/ovsdbserver-sb-0" Feb 17 18:00:32 crc kubenswrapper[4680]: I0217 18:00:32.974759 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e083ffca-d900-4b1a-bc3a-eacae2a81902-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e083ffca-d900-4b1a-bc3a-eacae2a81902\") " pod="openstack/ovsdbserver-sb-0" Feb 17 18:00:33 crc kubenswrapper[4680]: I0217 18:00:33.086428 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e083ffca-d900-4b1a-bc3a-eacae2a81902-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e083ffca-d900-4b1a-bc3a-eacae2a81902\") " pod="openstack/ovsdbserver-sb-0" Feb 17 18:00:33 crc kubenswrapper[4680]: I0217 18:00:33.086497 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e083ffca-d900-4b1a-bc3a-eacae2a81902-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e083ffca-d900-4b1a-bc3a-eacae2a81902\") " pod="openstack/ovsdbserver-sb-0" Feb 17 18:00:33 crc kubenswrapper[4680]: I0217 18:00:33.086552 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e083ffca-d900-4b1a-bc3a-eacae2a81902-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e083ffca-d900-4b1a-bc3a-eacae2a81902\") " pod="openstack/ovsdbserver-sb-0" Feb 17 18:00:33 crc kubenswrapper[4680]: I0217 18:00:33.086593 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e083ffca-d900-4b1a-bc3a-eacae2a81902-config\") pod \"ovsdbserver-sb-0\" (UID: \"e083ffca-d900-4b1a-bc3a-eacae2a81902\") " pod="openstack/ovsdbserver-sb-0" Feb 17 18:00:33 crc kubenswrapper[4680]: I0217 18:00:33.086642 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e083ffca-d900-4b1a-bc3a-eacae2a81902-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e083ffca-d900-4b1a-bc3a-eacae2a81902\") " pod="openstack/ovsdbserver-sb-0" Feb 17 18:00:33 crc kubenswrapper[4680]: I0217 18:00:33.086660 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e083ffca-d900-4b1a-bc3a-eacae2a81902-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e083ffca-d900-4b1a-bc3a-eacae2a81902\") " pod="openstack/ovsdbserver-sb-0" Feb 17 18:00:33 crc kubenswrapper[4680]: I0217 18:00:33.086738 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e083ffca-d900-4b1a-bc3a-eacae2a81902\") " pod="openstack/ovsdbserver-sb-0" Feb 17 18:00:33 crc kubenswrapper[4680]: I0217 18:00:33.086762 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpr2k\" (UniqueName: \"kubernetes.io/projected/e083ffca-d900-4b1a-bc3a-eacae2a81902-kube-api-access-cpr2k\") pod \"ovsdbserver-sb-0\" (UID: \"e083ffca-d900-4b1a-bc3a-eacae2a81902\") " pod="openstack/ovsdbserver-sb-0" Feb 17 18:00:33 crc kubenswrapper[4680]: I0217 18:00:33.088761 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e083ffca-d900-4b1a-bc3a-eacae2a81902\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-sb-0" Feb 17 18:00:33 crc kubenswrapper[4680]: I0217 18:00:33.089916 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e083ffca-d900-4b1a-bc3a-eacae2a81902-config\") pod \"ovsdbserver-sb-0\" (UID: \"e083ffca-d900-4b1a-bc3a-eacae2a81902\") " pod="openstack/ovsdbserver-sb-0" Feb 17 18:00:33 crc kubenswrapper[4680]: I0217 18:00:33.090712 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e083ffca-d900-4b1a-bc3a-eacae2a81902-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e083ffca-d900-4b1a-bc3a-eacae2a81902\") " pod="openstack/ovsdbserver-sb-0" Feb 17 18:00:33 crc kubenswrapper[4680]: I0217 18:00:33.091706 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e083ffca-d900-4b1a-bc3a-eacae2a81902-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e083ffca-d900-4b1a-bc3a-eacae2a81902\") " pod="openstack/ovsdbserver-sb-0" Feb 17 18:00:33 crc kubenswrapper[4680]: I0217 18:00:33.113531 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e083ffca-d900-4b1a-bc3a-eacae2a81902-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e083ffca-d900-4b1a-bc3a-eacae2a81902\") " pod="openstack/ovsdbserver-sb-0" Feb 17 18:00:33 crc kubenswrapper[4680]: I0217 18:00:33.113689 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e083ffca-d900-4b1a-bc3a-eacae2a81902-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e083ffca-d900-4b1a-bc3a-eacae2a81902\") " pod="openstack/ovsdbserver-sb-0" Feb 17 18:00:33 crc kubenswrapper[4680]: I0217 18:00:33.113754 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e083ffca-d900-4b1a-bc3a-eacae2a81902-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e083ffca-d900-4b1a-bc3a-eacae2a81902\") " pod="openstack/ovsdbserver-sb-0" Feb 17 18:00:33 crc kubenswrapper[4680]: I0217 18:00:33.119066 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpr2k\" (UniqueName: \"kubernetes.io/projected/e083ffca-d900-4b1a-bc3a-eacae2a81902-kube-api-access-cpr2k\") pod \"ovsdbserver-sb-0\" (UID: \"e083ffca-d900-4b1a-bc3a-eacae2a81902\") " pod="openstack/ovsdbserver-sb-0" Feb 17 18:00:33 crc kubenswrapper[4680]: I0217 18:00:33.119245 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e083ffca-d900-4b1a-bc3a-eacae2a81902\") " pod="openstack/ovsdbserver-sb-0" Feb 17 18:00:33 crc kubenswrapper[4680]: I0217 18:00:33.162670 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 17 18:00:33 crc kubenswrapper[4680]: I0217 18:00:33.787283 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 17 18:00:33 crc kubenswrapper[4680]: I0217 18:00:33.893201 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 17 18:00:34 crc kubenswrapper[4680]: I0217 18:00:34.416480 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-grk2z"] Feb 17 18:00:37 crc kubenswrapper[4680]: W0217 18:00:37.716057 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f069d89_4464_4526_ba2c_5d4a9de13e02.slice/crio-45a129039630020d4869214aba13a35eb3db8fe1d8c1014d081ec7579fc975bc WatchSource:0}: Error finding container 45a129039630020d4869214aba13a35eb3db8fe1d8c1014d081ec7579fc975bc: Status 404 returned error can't find the container with id 45a129039630020d4869214aba13a35eb3db8fe1d8c1014d081ec7579fc975bc Feb 17 18:00:37 crc kubenswrapper[4680]: I0217 18:00:37.926694 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6ba0d348-8721-4d4f-9904-994b1ae56423","Type":"ContainerStarted","Data":"f81bab464a1a8ebb22cb95f8b94bbcd8241667fd5cac5bfa614d71e2f628c24e"} Feb 17 18:00:37 crc kubenswrapper[4680]: I0217 18:00:37.928032 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e083ffca-d900-4b1a-bc3a-eacae2a81902","Type":"ContainerStarted","Data":"1e442fd04fede3182005b9ecfb511a2672e8bc421b3cb5793d8781bb9212c6e2"} Feb 17 18:00:37 crc kubenswrapper[4680]: I0217 18:00:37.928901 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-grk2z" event={"ID":"8f069d89-4464-4526-ba2c-5d4a9de13e02","Type":"ContainerStarted","Data":"45a129039630020d4869214aba13a35eb3db8fe1d8c1014d081ec7579fc975bc"} Feb 17 18:00:45 crc kubenswrapper[4680]: E0217 18:00:45.634473 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/openstack-rabbitmq:r42p" Feb 17 18:00:45 crc kubenswrapper[4680]: E0217 18:00:45.636032 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/lmiccini/openstack-rabbitmq:r42p,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjcvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(81eeaf28-d5ac-44e3-bad2-215fc89f4895): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 18:00:45 crc kubenswrapper[4680]: E0217 18:00:45.637777 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="81eeaf28-d5ac-44e3-bad2-215fc89f4895" Feb 17 18:00:46 crc kubenswrapper[4680]: E0217 18:00:46.012281 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/openstack-rabbitmq:r42p\\\"\"" pod="openstack/rabbitmq-server-0" podUID="81eeaf28-d5ac-44e3-bad2-215fc89f4895" Feb 17 18:00:51 crc kubenswrapper[4680]: E0217 18:00:51.340570 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/openstack-rabbitmq:r42p" Feb 17 18:00:51 crc kubenswrapper[4680]: E0217 18:00:51.341328 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/lmiccini/openstack-rabbitmq:r42p,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 
/var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kd6nq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(8d9f1e89-7561-458e-9189-73106b757079): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 18:00:51 crc kubenswrapper[4680]: E0217 18:00:51.342787 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="8d9f1e89-7561-458e-9189-73106b757079" Feb 17 18:00:52 crc kubenswrapper[4680]: E0217 18:00:52.061389 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/openstack-rabbitmq:r42p\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="8d9f1e89-7561-458e-9189-73106b757079" Feb 17 18:00:52 crc kubenswrapper[4680]: E0217 18:00:52.083189 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Feb 17 18:00:52 crc kubenswrapper[4680]: E0217 18:00:52.084994 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n5c7h6ch86hc4h59fh5c6h54fhc6h57ch547h59h689h57bh67dhdbh68ch5f7h5chd7h599h5d8h67dh5c4h64fh657h5dch6dh55ch5fdh55ch5c4h66dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nxgh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(391eec5a-7bea-47ed-963a-8152e1db2357): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 18:00:52 crc kubenswrapper[4680]: E0217 18:00:52.086682 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="391eec5a-7bea-47ed-963a-8152e1db2357" Feb 17 18:00:52 crc kubenswrapper[4680]: I0217 18:00:52.773291 4680 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-tw5w5"] Feb 17 18:00:52 crc kubenswrapper[4680]: I0217 18:00:52.774604 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-tw5w5" Feb 17 18:00:52 crc kubenswrapper[4680]: I0217 18:00:52.782813 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 17 18:00:52 crc kubenswrapper[4680]: I0217 18:00:52.784834 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-tw5w5"] Feb 17 18:00:52 crc kubenswrapper[4680]: I0217 18:00:52.956126 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-b8tp9"] Feb 17 18:00:52 crc kubenswrapper[4680]: I0217 18:00:52.960025 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnjdg\" (UniqueName: \"kubernetes.io/projected/42bb122a-c2c2-4a70-a745-1a6bb9334698-kube-api-access-mnjdg\") pod \"ovn-controller-metrics-tw5w5\" (UID: \"42bb122a-c2c2-4a70-a745-1a6bb9334698\") " pod="openstack/ovn-controller-metrics-tw5w5" Feb 17 18:00:52 crc kubenswrapper[4680]: I0217 18:00:52.960088 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42bb122a-c2c2-4a70-a745-1a6bb9334698-config\") pod \"ovn-controller-metrics-tw5w5\" (UID: \"42bb122a-c2c2-4a70-a745-1a6bb9334698\") " pod="openstack/ovn-controller-metrics-tw5w5" Feb 17 18:00:52 crc kubenswrapper[4680]: I0217 18:00:52.960150 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/42bb122a-c2c2-4a70-a745-1a6bb9334698-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-tw5w5\" (UID: \"42bb122a-c2c2-4a70-a745-1a6bb9334698\") " pod="openstack/ovn-controller-metrics-tw5w5" Feb 17 18:00:52 crc kubenswrapper[4680]: I0217 18:00:52.960176 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/42bb122a-c2c2-4a70-a745-1a6bb9334698-ovn-rundir\") pod \"ovn-controller-metrics-tw5w5\" (UID: \"42bb122a-c2c2-4a70-a745-1a6bb9334698\") " pod="openstack/ovn-controller-metrics-tw5w5" Feb 17 18:00:52 crc kubenswrapper[4680]: I0217 18:00:52.960202 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42bb122a-c2c2-4a70-a745-1a6bb9334698-combined-ca-bundle\") pod \"ovn-controller-metrics-tw5w5\" (UID: \"42bb122a-c2c2-4a70-a745-1a6bb9334698\") " pod="openstack/ovn-controller-metrics-tw5w5" Feb 17 18:00:52 crc kubenswrapper[4680]: I0217 18:00:52.960234 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/42bb122a-c2c2-4a70-a745-1a6bb9334698-ovs-rundir\") pod \"ovn-controller-metrics-tw5w5\" (UID: \"42bb122a-c2c2-4a70-a745-1a6bb9334698\") " pod="openstack/ovn-controller-metrics-tw5w5" Feb 17 18:00:52 crc kubenswrapper[4680]: I0217 18:00:52.993733 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-bhpg9"] Feb 17 18:00:52 crc kubenswrapper[4680]: I0217 18:00:52.995004 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-bhpg9" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.000243 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.008544 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-bhpg9"] Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.061792 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnjdg\" (UniqueName: \"kubernetes.io/projected/42bb122a-c2c2-4a70-a745-1a6bb9334698-kube-api-access-mnjdg\") pod \"ovn-controller-metrics-tw5w5\" (UID: \"42bb122a-c2c2-4a70-a745-1a6bb9334698\") " pod="openstack/ovn-controller-metrics-tw5w5" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.061882 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42bb122a-c2c2-4a70-a745-1a6bb9334698-config\") pod \"ovn-controller-metrics-tw5w5\" (UID: \"42bb122a-c2c2-4a70-a745-1a6bb9334698\") " pod="openstack/ovn-controller-metrics-tw5w5" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.061932 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/42bb122a-c2c2-4a70-a745-1a6bb9334698-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-tw5w5\" (UID: \"42bb122a-c2c2-4a70-a745-1a6bb9334698\") " pod="openstack/ovn-controller-metrics-tw5w5" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.061973 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/42bb122a-c2c2-4a70-a745-1a6bb9334698-ovn-rundir\") pod \"ovn-controller-metrics-tw5w5\" (UID: \"42bb122a-c2c2-4a70-a745-1a6bb9334698\") " pod="openstack/ovn-controller-metrics-tw5w5" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.062002 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42bb122a-c2c2-4a70-a745-1a6bb9334698-combined-ca-bundle\") pod \"ovn-controller-metrics-tw5w5\" (UID: \"42bb122a-c2c2-4a70-a745-1a6bb9334698\") " pod="openstack/ovn-controller-metrics-tw5w5" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.062037 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/42bb122a-c2c2-4a70-a745-1a6bb9334698-ovs-rundir\") pod \"ovn-controller-metrics-tw5w5\" (UID: \"42bb122a-c2c2-4a70-a745-1a6bb9334698\") " pod="openstack/ovn-controller-metrics-tw5w5" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.062418 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/42bb122a-c2c2-4a70-a745-1a6bb9334698-ovs-rundir\") pod \"ovn-controller-metrics-tw5w5\" (UID: \"42bb122a-c2c2-4a70-a745-1a6bb9334698\") " pod="openstack/ovn-controller-metrics-tw5w5" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.062462 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/42bb122a-c2c2-4a70-a745-1a6bb9334698-ovn-rundir\") pod \"ovn-controller-metrics-tw5w5\" (UID: \"42bb122a-c2c2-4a70-a745-1a6bb9334698\") " pod="openstack/ovn-controller-metrics-tw5w5" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.062529 4680 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42bb122a-c2c2-4a70-a745-1a6bb9334698-config\") pod \"ovn-controller-metrics-tw5w5\" (UID: \"42bb122a-c2c2-4a70-a745-1a6bb9334698\") " pod="openstack/ovn-controller-metrics-tw5w5" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.070786 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42bb122a-c2c2-4a70-a745-1a6bb9334698-combined-ca-bundle\") pod \"ovn-controller-metrics-tw5w5\" (UID: \"42bb122a-c2c2-4a70-a745-1a6bb9334698\") " pod="openstack/ovn-controller-metrics-tw5w5" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.072815 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/42bb122a-c2c2-4a70-a745-1a6bb9334698-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-tw5w5\" (UID: \"42bb122a-c2c2-4a70-a745-1a6bb9334698\") " pod="openstack/ovn-controller-metrics-tw5w5" Feb 17 18:00:53 crc kubenswrapper[4680]: E0217 18:00:53.085083 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="391eec5a-7bea-47ed-963a-8152e1db2357" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.119106 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnjdg\" (UniqueName: \"kubernetes.io/projected/42bb122a-c2c2-4a70-a745-1a6bb9334698-kube-api-access-mnjdg\") pod \"ovn-controller-metrics-tw5w5\" (UID: \"42bb122a-c2c2-4a70-a745-1a6bb9334698\") " pod="openstack/ovn-controller-metrics-tw5w5" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.163038 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r94cn\" (UniqueName: \"kubernetes.io/projected/d3792713-546a-40e9-84a5-9b8c4b0bd4ba-kube-api-access-r94cn\") pod \"dnsmasq-dns-7fd796d7df-bhpg9\" (UID: \"d3792713-546a-40e9-84a5-9b8c4b0bd4ba\") " pod="openstack/dnsmasq-dns-7fd796d7df-bhpg9" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.163095 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3792713-546a-40e9-84a5-9b8c4b0bd4ba-config\") pod \"dnsmasq-dns-7fd796d7df-bhpg9\" (UID: \"d3792713-546a-40e9-84a5-9b8c4b0bd4ba\") " pod="openstack/dnsmasq-dns-7fd796d7df-bhpg9" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.163134 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3792713-546a-40e9-84a5-9b8c4b0bd4ba-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-bhpg9\" (UID: \"d3792713-546a-40e9-84a5-9b8c4b0bd4ba\") " pod="openstack/dnsmasq-dns-7fd796d7df-bhpg9" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.163153 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3792713-546a-40e9-84a5-9b8c4b0bd4ba-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-bhpg9\" (UID: \"d3792713-546a-40e9-84a5-9b8c4b0bd4ba\") " pod="openstack/dnsmasq-dns-7fd796d7df-bhpg9" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.266075 4680 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r94cn\" (UniqueName: \"kubernetes.io/projected/d3792713-546a-40e9-84a5-9b8c4b0bd4ba-kube-api-access-r94cn\") pod \"dnsmasq-dns-7fd796d7df-bhpg9\" (UID: \"d3792713-546a-40e9-84a5-9b8c4b0bd4ba\") " pod="openstack/dnsmasq-dns-7fd796d7df-bhpg9" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.266187 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3792713-546a-40e9-84a5-9b8c4b0bd4ba-config\") pod \"dnsmasq-dns-7fd796d7df-bhpg9\" (UID: \"d3792713-546a-40e9-84a5-9b8c4b0bd4ba\") " pod="openstack/dnsmasq-dns-7fd796d7df-bhpg9" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.266259 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3792713-546a-40e9-84a5-9b8c4b0bd4ba-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-bhpg9\" (UID: \"d3792713-546a-40e9-84a5-9b8c4b0bd4ba\") " pod="openstack/dnsmasq-dns-7fd796d7df-bhpg9" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.266284 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3792713-546a-40e9-84a5-9b8c4b0bd4ba-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-bhpg9\" (UID: \"d3792713-546a-40e9-84a5-9b8c4b0bd4ba\") " pod="openstack/dnsmasq-dns-7fd796d7df-bhpg9" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.268533 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3792713-546a-40e9-84a5-9b8c4b0bd4ba-config\") pod \"dnsmasq-dns-7fd796d7df-bhpg9\" (UID: \"d3792713-546a-40e9-84a5-9b8c4b0bd4ba\") " pod="openstack/dnsmasq-dns-7fd796d7df-bhpg9" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.269487 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3792713-546a-40e9-84a5-9b8c4b0bd4ba-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-bhpg9\" (UID: \"d3792713-546a-40e9-84a5-9b8c4b0bd4ba\") " pod="openstack/dnsmasq-dns-7fd796d7df-bhpg9" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.270102 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3792713-546a-40e9-84a5-9b8c4b0bd4ba-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-bhpg9\" (UID: \"d3792713-546a-40e9-84a5-9b8c4b0bd4ba\") " pod="openstack/dnsmasq-dns-7fd796d7df-bhpg9" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.273548 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4z7hf"] Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.308332 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r94cn\" (UniqueName: \"kubernetes.io/projected/d3792713-546a-40e9-84a5-9b8c4b0bd4ba-kube-api-access-r94cn\") pod \"dnsmasq-dns-7fd796d7df-bhpg9\" (UID: \"d3792713-546a-40e9-84a5-9b8c4b0bd4ba\") " pod="openstack/dnsmasq-dns-7fd796d7df-bhpg9" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.315880 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-bhpg9" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.327487 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-gbc6h"] Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.329173 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-gbc6h" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.333677 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.392481 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-gbc6h"] Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.398877 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-tw5w5" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.469663 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2313798-4510-4dae-a8d1-d4f7c5c08627-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-gbc6h\" (UID: \"e2313798-4510-4dae-a8d1-d4f7c5c08627\") " pod="openstack/dnsmasq-dns-86db49b7ff-gbc6h" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.469714 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xplgn\" (UniqueName: \"kubernetes.io/projected/e2313798-4510-4dae-a8d1-d4f7c5c08627-kube-api-access-xplgn\") pod \"dnsmasq-dns-86db49b7ff-gbc6h\" (UID: \"e2313798-4510-4dae-a8d1-d4f7c5c08627\") " pod="openstack/dnsmasq-dns-86db49b7ff-gbc6h" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.469740 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2313798-4510-4dae-a8d1-d4f7c5c08627-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-gbc6h\" (UID: \"e2313798-4510-4dae-a8d1-d4f7c5c08627\") " pod="openstack/dnsmasq-dns-86db49b7ff-gbc6h" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.469786 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2313798-4510-4dae-a8d1-d4f7c5c08627-config\") pod \"dnsmasq-dns-86db49b7ff-gbc6h\" (UID: \"e2313798-4510-4dae-a8d1-d4f7c5c08627\") " pod="openstack/dnsmasq-dns-86db49b7ff-gbc6h" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.469838 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2313798-4510-4dae-a8d1-d4f7c5c08627-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-gbc6h\" (UID: \"e2313798-4510-4dae-a8d1-d4f7c5c08627\") " pod="openstack/dnsmasq-dns-86db49b7ff-gbc6h" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.570867 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2313798-4510-4dae-a8d1-d4f7c5c08627-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-gbc6h\" (UID: \"e2313798-4510-4dae-a8d1-d4f7c5c08627\") " pod="openstack/dnsmasq-dns-86db49b7ff-gbc6h" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.570925 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xplgn\" (UniqueName: 
\"kubernetes.io/projected/e2313798-4510-4dae-a8d1-d4f7c5c08627-kube-api-access-xplgn\") pod \"dnsmasq-dns-86db49b7ff-gbc6h\" (UID: \"e2313798-4510-4dae-a8d1-d4f7c5c08627\") " pod="openstack/dnsmasq-dns-86db49b7ff-gbc6h" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.570953 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2313798-4510-4dae-a8d1-d4f7c5c08627-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-gbc6h\" (UID: \"e2313798-4510-4dae-a8d1-d4f7c5c08627\") " pod="openstack/dnsmasq-dns-86db49b7ff-gbc6h" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.570994 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2313798-4510-4dae-a8d1-d4f7c5c08627-config\") pod \"dnsmasq-dns-86db49b7ff-gbc6h\" (UID: \"e2313798-4510-4dae-a8d1-d4f7c5c08627\") " pod="openstack/dnsmasq-dns-86db49b7ff-gbc6h" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.571047 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2313798-4510-4dae-a8d1-d4f7c5c08627-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-gbc6h\" (UID: \"e2313798-4510-4dae-a8d1-d4f7c5c08627\") " pod="openstack/dnsmasq-dns-86db49b7ff-gbc6h" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.572472 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2313798-4510-4dae-a8d1-d4f7c5c08627-config\") pod \"dnsmasq-dns-86db49b7ff-gbc6h\" (UID: \"e2313798-4510-4dae-a8d1-d4f7c5c08627\") " pod="openstack/dnsmasq-dns-86db49b7ff-gbc6h" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.572703 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2313798-4510-4dae-a8d1-d4f7c5c08627-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-gbc6h\" (UID: \"e2313798-4510-4dae-a8d1-d4f7c5c08627\") " pod="openstack/dnsmasq-dns-86db49b7ff-gbc6h" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.574218 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2313798-4510-4dae-a8d1-d4f7c5c08627-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-gbc6h\" (UID: \"e2313798-4510-4dae-a8d1-d4f7c5c08627\") " pod="openstack/dnsmasq-dns-86db49b7ff-gbc6h" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.574497 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2313798-4510-4dae-a8d1-d4f7c5c08627-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-gbc6h\" (UID: \"e2313798-4510-4dae-a8d1-d4f7c5c08627\") " pod="openstack/dnsmasq-dns-86db49b7ff-gbc6h" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.608172 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xplgn\" (UniqueName: \"kubernetes.io/projected/e2313798-4510-4dae-a8d1-d4f7c5c08627-kube-api-access-xplgn\") pod \"dnsmasq-dns-86db49b7ff-gbc6h\" (UID: \"e2313798-4510-4dae-a8d1-d4f7c5c08627\") " pod="openstack/dnsmasq-dns-86db49b7ff-gbc6h" Feb 17 18:00:53 crc kubenswrapper[4680]: I0217 18:00:53.667610 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-gbc6h" Feb 17 18:00:55 crc kubenswrapper[4680]: E0217 18:00:55.331627 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 17 18:00:55 crc kubenswrapper[4680]: E0217 18:00:55.331892 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-55n6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(92bcbcf3-8caf-4493-99ac-478e3d55a1c1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 18:00:55 crc kubenswrapper[4680]: E0217 18:00:55.333040 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="92bcbcf3-8caf-4493-99ac-478e3d55a1c1" Feb 17 18:00:55 crc kubenswrapper[4680]: E0217 18:00:55.349963 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 17 18:00:55 crc kubenswrapper[4680]: E0217 18:00:55.350129 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wnn6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(408a63fd-0033-4bf6-925a-cc82a63b2a60): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 18:00:55 crc kubenswrapper[4680]: E0217 18:00:55.351334 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="408a63fd-0033-4bf6-925a-cc82a63b2a60" Feb 17 18:00:56 crc kubenswrapper[4680]: E0217 18:00:56.058265 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 17 18:00:56 crc kubenswrapper[4680]: E0217 18:00:56.058458 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qjcw9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-b8tp9_openstack(81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 18:00:56 crc kubenswrapper[4680]: E0217 18:00:56.058655 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 17 18:00:56 crc kubenswrapper[4680]: E0217 18:00:56.058758 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vthv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-fhwrg_openstack(c26964fe-e05b-4195-84f8-469d567de82f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 18:00:56 crc kubenswrapper[4680]: E0217 18:00:56.059672 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-b8tp9" podUID="81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a" Feb 17 18:00:56 crc kubenswrapper[4680]: E0217 18:00:56.060752 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-fhwrg" podUID="c26964fe-e05b-4195-84f8-469d567de82f" Feb 17 18:00:56 crc kubenswrapper[4680]: E0217 18:00:56.073032 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 17 18:00:56 crc kubenswrapper[4680]: E0217 18:00:56.073178 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnrx4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-4z7hf_openstack(a108e860-f849-47e3-b9d7-26fe735925db): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 18:00:56 crc kubenswrapper[4680]: E0217 18:00:56.074347 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-4z7hf" podUID="a108e860-f849-47e3-b9d7-26fe735925db" Feb 17 18:00:56 crc kubenswrapper[4680]: E0217 18:00:56.106936 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="408a63fd-0033-4bf6-925a-cc82a63b2a60" Feb 17 18:00:56 crc kubenswrapper[4680]: E0217 18:00:56.109169 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="92bcbcf3-8caf-4493-99ac-478e3d55a1c1" Feb 17 18:00:57 crc kubenswrapper[4680]: E0217 18:00:57.652583 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 17 18:00:57 crc kubenswrapper[4680]: E0217 18:00:57.653102 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p5fd8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-h4f7r_openstack(49af63a1-f109-42f7-a96e-d1364799f6be): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 18:00:57 crc kubenswrapper[4680]: E0217 18:00:57.654385 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-h4f7r" podUID="49af63a1-f109-42f7-a96e-d1364799f6be" Feb 17 18:00:57 crc kubenswrapper[4680]: I0217 18:00:57.794562 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-b8tp9" Feb 17 18:00:57 crc kubenswrapper[4680]: I0217 18:00:57.806605 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4z7hf" Feb 17 18:00:57 crc kubenswrapper[4680]: I0217 18:00:57.825011 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-fhwrg" Feb 17 18:00:57 crc kubenswrapper[4680]: I0217 18:00:57.965935 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnrx4\" (UniqueName: \"kubernetes.io/projected/a108e860-f849-47e3-b9d7-26fe735925db-kube-api-access-bnrx4\") pod \"a108e860-f849-47e3-b9d7-26fe735925db\" (UID: \"a108e860-f849-47e3-b9d7-26fe735925db\") " Feb 17 18:00:57 crc kubenswrapper[4680]: I0217 18:00:57.966038 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c26964fe-e05b-4195-84f8-469d567de82f-dns-svc\") pod \"c26964fe-e05b-4195-84f8-469d567de82f\" (UID: \"c26964fe-e05b-4195-84f8-469d567de82f\") " Feb 17 18:00:57 crc kubenswrapper[4680]: I0217 18:00:57.966285 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjcw9\" (UniqueName: \"kubernetes.io/projected/81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a-kube-api-access-qjcw9\") pod \"81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a\" (UID: \"81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a\") " Feb 17 18:00:57 crc kubenswrapper[4680]: I0217 18:00:57.966304 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a108e860-f849-47e3-b9d7-26fe735925db-dns-svc\") pod \"a108e860-f849-47e3-b9d7-26fe735925db\" (UID: \"a108e860-f849-47e3-b9d7-26fe735925db\") " Feb 17 18:00:57 crc kubenswrapper[4680]: I0217 18:00:57.966362 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a-config\") pod \"81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a\" (UID: \"81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a\") " Feb 17 18:00:57 crc kubenswrapper[4680]: I0217 18:00:57.966377 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c26964fe-e05b-4195-84f8-469d567de82f-config\") pod \"c26964fe-e05b-4195-84f8-469d567de82f\" (UID: \"c26964fe-e05b-4195-84f8-469d567de82f\") " Feb 17 18:00:57 crc kubenswrapper[4680]: I0217 18:00:57.966417 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a-dns-svc\") pod \"81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a\" (UID: \"81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a\") " Feb 17 18:00:57 crc kubenswrapper[4680]: I0217 18:00:57.966439 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a108e860-f849-47e3-b9d7-26fe735925db-config\") pod \"a108e860-f849-47e3-b9d7-26fe735925db\" (UID: \"a108e860-f849-47e3-b9d7-26fe735925db\") " Feb 17 18:00:57 crc kubenswrapper[4680]: I0217 18:00:57.966463 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vthv5\" (UniqueName: \"kubernetes.io/projected/c26964fe-e05b-4195-84f8-469d567de82f-kube-api-access-vthv5\") pod \"c26964fe-e05b-4195-84f8-469d567de82f\" (UID: \"c26964fe-e05b-4195-84f8-469d567de82f\") " Feb 17 18:00:57 crc kubenswrapper[4680]: I0217 18:00:57.966723 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a108e860-f849-47e3-b9d7-26fe735925db-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a108e860-f849-47e3-b9d7-26fe735925db" (UID: "a108e860-f849-47e3-b9d7-26fe735925db"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:00:57 crc kubenswrapper[4680]: I0217 18:00:57.966859 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a108e860-f849-47e3-b9d7-26fe735925db-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 18:00:57 crc kubenswrapper[4680]: I0217 18:00:57.967217 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a108e860-f849-47e3-b9d7-26fe735925db-config" (OuterVolumeSpecName: "config") pod "a108e860-f849-47e3-b9d7-26fe735925db" (UID: "a108e860-f849-47e3-b9d7-26fe735925db"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:00:57 crc kubenswrapper[4680]: I0217 18:00:57.967399 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a-config" (OuterVolumeSpecName: "config") pod "81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a" (UID: "81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:00:57 crc kubenswrapper[4680]: I0217 18:00:57.967898 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a" (UID: "81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:00:57 crc kubenswrapper[4680]: I0217 18:00:57.972841 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a-kube-api-access-qjcw9" (OuterVolumeSpecName: "kube-api-access-qjcw9") pod "81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a" (UID: "81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a"). InnerVolumeSpecName "kube-api-access-qjcw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:00:57 crc kubenswrapper[4680]: I0217 18:00:57.973781 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a108e860-f849-47e3-b9d7-26fe735925db-kube-api-access-bnrx4" (OuterVolumeSpecName: "kube-api-access-bnrx4") pod "a108e860-f849-47e3-b9d7-26fe735925db" (UID: "a108e860-f849-47e3-b9d7-26fe735925db"). InnerVolumeSpecName "kube-api-access-bnrx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:00:58 crc kubenswrapper[4680]: I0217 18:00:58.041706 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c26964fe-e05b-4195-84f8-469d567de82f-kube-api-access-vthv5" (OuterVolumeSpecName: "kube-api-access-vthv5") pod "c26964fe-e05b-4195-84f8-469d567de82f" (UID: "c26964fe-e05b-4195-84f8-469d567de82f"). InnerVolumeSpecName "kube-api-access-vthv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:00:58 crc kubenswrapper[4680]: I0217 18:00:58.041812 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c26964fe-e05b-4195-84f8-469d567de82f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c26964fe-e05b-4195-84f8-469d567de82f" (UID: "c26964fe-e05b-4195-84f8-469d567de82f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:00:58 crc kubenswrapper[4680]: I0217 18:00:58.042062 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c26964fe-e05b-4195-84f8-469d567de82f-config" (OuterVolumeSpecName: "config") pod "c26964fe-e05b-4195-84f8-469d567de82f" (UID: "c26964fe-e05b-4195-84f8-469d567de82f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:00:58 crc kubenswrapper[4680]: I0217 18:00:58.069336 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjcw9\" (UniqueName: \"kubernetes.io/projected/81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a-kube-api-access-qjcw9\") on node \"crc\" DevicePath \"\"" Feb 17 18:00:58 crc kubenswrapper[4680]: I0217 18:00:58.069368 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a-config\") on node \"crc\" DevicePath \"\"" Feb 17 18:00:58 crc kubenswrapper[4680]: I0217 18:00:58.069396 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c26964fe-e05b-4195-84f8-469d567de82f-config\") on node \"crc\" DevicePath \"\"" Feb 17 18:00:58 crc kubenswrapper[4680]: I0217 18:00:58.069405 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 18:00:58 crc kubenswrapper[4680]: I0217 18:00:58.069414 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a108e860-f849-47e3-b9d7-26fe735925db-config\") on node \"crc\" DevicePath \"\"" Feb 17 18:00:58 crc kubenswrapper[4680]: I0217 18:00:58.069422 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vthv5\" (UniqueName: \"kubernetes.io/projected/c26964fe-e05b-4195-84f8-469d567de82f-kube-api-access-vthv5\") on node \"crc\" DevicePath \"\"" Feb 17 18:00:58 crc kubenswrapper[4680]: I0217 18:00:58.069431 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnrx4\" (UniqueName: \"kubernetes.io/projected/a108e860-f849-47e3-b9d7-26fe735925db-kube-api-access-bnrx4\") on node \"crc\" DevicePath \"\"" Feb 17 18:00:58 crc kubenswrapper[4680]: I0217 18:00:58.069440 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c26964fe-e05b-4195-84f8-469d567de82f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 18:00:58 crc kubenswrapper[4680]: I0217 18:00:58.144375 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-fhwrg" Feb 17 18:00:58 crc kubenswrapper[4680]: I0217 18:00:58.144399 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-fhwrg" event={"ID":"c26964fe-e05b-4195-84f8-469d567de82f","Type":"ContainerDied","Data":"4305f90957f93208ea421a281c29a93d51ff422627a9d6fcb351cf856891a0fc"} Feb 17 18:00:58 crc kubenswrapper[4680]: I0217 18:00:58.145454 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-4z7hf" event={"ID":"a108e860-f849-47e3-b9d7-26fe735925db","Type":"ContainerDied","Data":"a6b19cfb87566caceb8089ce6a4697f25c12eb396bb9e8c69b212dd712950bc9"} Feb 17 18:00:58 crc kubenswrapper[4680]: I0217 18:00:58.145512 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4z7hf" Feb 17 18:00:58 crc kubenswrapper[4680]: I0217 18:00:58.150492 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-b8tp9" event={"ID":"81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a","Type":"ContainerDied","Data":"d36fcb08f7db7aba147b8595a40fb92da89eddba2d5b7091e33d974cd3b3b5b8"} Feb 17 18:00:58 crc kubenswrapper[4680]: I0217 18:00:58.151006 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-b8tp9" Feb 17 18:00:58 crc kubenswrapper[4680]: I0217 18:00:58.223133 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-fhwrg"] Feb 17 18:00:58 crc kubenswrapper[4680]: I0217 18:00:58.226784 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-fhwrg"] Feb 17 18:00:58 crc kubenswrapper[4680]: I0217 18:00:58.268049 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-b8tp9"] Feb 17 18:00:58 crc kubenswrapper[4680]: I0217 18:00:58.286429 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-b8tp9"] Feb 17 18:00:58 crc kubenswrapper[4680]: I0217 18:00:58.299215 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4z7hf"] Feb 17 18:00:58 crc kubenswrapper[4680]: I0217 18:00:58.305257 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4z7hf"] Feb 17 18:00:59 crc kubenswrapper[4680]: I0217 18:00:59.115469 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a" path="/var/lib/kubelet/pods/81d2fe8b-9f0f-4ca2-b53b-39d5af9c5d6a/volumes" Feb 17 18:00:59 crc kubenswrapper[4680]: I0217 18:00:59.116452 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a108e860-f849-47e3-b9d7-26fe735925db" path="/var/lib/kubelet/pods/a108e860-f849-47e3-b9d7-26fe735925db/volumes" Feb 17 18:00:59 crc kubenswrapper[4680]: I0217 18:00:59.116927 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c26964fe-e05b-4195-84f8-469d567de82f" path="/var/lib/kubelet/pods/c26964fe-e05b-4195-84f8-469d567de82f/volumes" Feb 17 18:00:59 crc kubenswrapper[4680]: I0217 18:00:59.284693 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-h4f7r" Feb 17 18:00:59 crc kubenswrapper[4680]: I0217 18:00:59.391369 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5fd8\" (UniqueName: \"kubernetes.io/projected/49af63a1-f109-42f7-a96e-d1364799f6be-kube-api-access-p5fd8\") pod \"49af63a1-f109-42f7-a96e-d1364799f6be\" (UID: \"49af63a1-f109-42f7-a96e-d1364799f6be\") " Feb 17 18:00:59 crc kubenswrapper[4680]: I0217 18:00:59.391592 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49af63a1-f109-42f7-a96e-d1364799f6be-config\") pod \"49af63a1-f109-42f7-a96e-d1364799f6be\" (UID: \"49af63a1-f109-42f7-a96e-d1364799f6be\") " Feb 17 18:00:59 crc kubenswrapper[4680]: I0217 18:00:59.392037 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49af63a1-f109-42f7-a96e-d1364799f6be-config" (OuterVolumeSpecName: "config") pod "49af63a1-f109-42f7-a96e-d1364799f6be" (UID: "49af63a1-f109-42f7-a96e-d1364799f6be"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:00:59 crc kubenswrapper[4680]: I0217 18:00:59.392446 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49af63a1-f109-42f7-a96e-d1364799f6be-config\") on node \"crc\" DevicePath \"\"" Feb 17 18:00:59 crc kubenswrapper[4680]: I0217 18:00:59.402587 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49af63a1-f109-42f7-a96e-d1364799f6be-kube-api-access-p5fd8" (OuterVolumeSpecName: "kube-api-access-p5fd8") pod "49af63a1-f109-42f7-a96e-d1364799f6be" (UID: "49af63a1-f109-42f7-a96e-d1364799f6be"). InnerVolumeSpecName "kube-api-access-p5fd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:00:59 crc kubenswrapper[4680]: I0217 18:00:59.493910 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5fd8\" (UniqueName: \"kubernetes.io/projected/49af63a1-f109-42f7-a96e-d1364799f6be-kube-api-access-p5fd8\") on node \"crc\" DevicePath \"\"" Feb 17 18:00:59 crc kubenswrapper[4680]: I0217 18:00:59.619879 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-tw5w5"] Feb 17 18:00:59 crc kubenswrapper[4680]: I0217 18:00:59.714197 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-bhpg9"] Feb 17 18:00:59 crc kubenswrapper[4680]: I0217 18:00:59.833744 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-gbc6h"] Feb 17 18:00:59 crc kubenswrapper[4680]: W0217 18:00:59.930873 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2313798_4510_4dae_a8d1_d4f7c5c08627.slice/crio-ecef0f25b6cc5e3704e793f10992a6a439c10b2bbb78bc8bf23cd97152099fa8 WatchSource:0}: Error finding container ecef0f25b6cc5e3704e793f10992a6a439c10b2bbb78bc8bf23cd97152099fa8: Status 404 returned error can't find the container with id ecef0f25b6cc5e3704e793f10992a6a439c10b2bbb78bc8bf23cd97152099fa8 Feb 17 18:01:00 crc kubenswrapper[4680]: I0217 18:01:00.162899 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-h4f7r" event={"ID":"49af63a1-f109-42f7-a96e-d1364799f6be","Type":"ContainerDied","Data":"bc222d2ec7f3f42f11080a27c42acf58b0919decb8352def3d7af3d838f05b0a"} Feb 17 18:01:00 crc kubenswrapper[4680]: I0217 18:01:00.162975 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-h4f7r" Feb 17 18:01:00 crc kubenswrapper[4680]: I0217 18:01:00.167412 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-tw5w5" event={"ID":"42bb122a-c2c2-4a70-a745-1a6bb9334698","Type":"ContainerStarted","Data":"1904230c73bc53d4ff3e6402604eb8b6a41687112e4deb4a617098e67c263585"} Feb 17 18:01:00 crc kubenswrapper[4680]: I0217 18:01:00.168557 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-gbc6h" event={"ID":"e2313798-4510-4dae-a8d1-d4f7c5c08627","Type":"ContainerStarted","Data":"ecef0f25b6cc5e3704e793f10992a6a439c10b2bbb78bc8bf23cd97152099fa8"} Feb 17 18:01:00 crc kubenswrapper[4680]: I0217 18:01:00.170035 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-bhpg9" event={"ID":"d3792713-546a-40e9-84a5-9b8c4b0bd4ba","Type":"ContainerStarted","Data":"6682c47ac30f0491c214611a2cc2023e8b75117534a38f6419a603c5ae027dc5"} Feb 17 18:01:00 crc kubenswrapper[4680]: I0217 18:01:00.226525 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-h4f7r"] Feb 17 18:01:00 crc kubenswrapper[4680]: I0217 18:01:00.240633 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-h4f7r"] Feb 17 18:01:00 crc kubenswrapper[4680]: E0217 18:01:00.307357 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 17 18:01:00 crc kubenswrapper[4680]: E0217 18:01:00.307418 4680 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 17 18:01:00 crc kubenswrapper[4680]: E0217 18:01:00.307558 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-grhpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(6b425c86-a83e-4258-8432-83308132d038): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 18:01:00 crc kubenswrapper[4680]: E0217 18:01:00.309182 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="6b425c86-a83e-4258-8432-83308132d038" Feb 17 18:01:01 crc kubenswrapper[4680]: I0217 18:01:01.134435 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49af63a1-f109-42f7-a96e-d1364799f6be" path="/var/lib/kubelet/pods/49af63a1-f109-42f7-a96e-d1364799f6be/volumes" Feb 17 18:01:01 crc kubenswrapper[4680]: I0217 18:01:01.182289 4680 generic.go:334] "Generic (PLEG): container finished" podID="d3792713-546a-40e9-84a5-9b8c4b0bd4ba" containerID="f31ad2240644510d5a3e2df224f29b361bce7ba8ee8b19f662b668be0f3737b7" exitCode=0 Feb 17 18:01:01 crc kubenswrapper[4680]: I0217 18:01:01.183922 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-bhpg9" event={"ID":"d3792713-546a-40e9-84a5-9b8c4b0bd4ba","Type":"ContainerDied","Data":"f31ad2240644510d5a3e2df224f29b361bce7ba8ee8b19f662b668be0f3737b7"} Feb 17 18:01:01 crc kubenswrapper[4680]: I0217 18:01:01.185969 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6ba0d348-8721-4d4f-9904-994b1ae56423","Type":"ContainerStarted","Data":"97d3bf88c39a4cc0aa17be8999818b466fee0d854bd6f68d3f99e95a7038c081"} Feb 17 18:01:01 crc kubenswrapper[4680]: I0217 18:01:01.187529 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7dz9r" event={"ID":"2ce593ef-6a97-4361-af72-7637711c07b5","Type":"ContainerStarted","Data":"422aac9d907cf072965f6e5a51f83c12dc8884f8a4199f1c3c64bee8d4c39de0"} Feb 17 18:01:01 crc kubenswrapper[4680]: I0217 18:01:01.188492 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-7dz9r" Feb 17 18:01:01 crc kubenswrapper[4680]: I0217 18:01:01.190908 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e083ffca-d900-4b1a-bc3a-eacae2a81902","Type":"ContainerStarted","Data":"8a7d3d7234198657e9b862ff08b7e61e940963f14d6d7c62fda78a96144485c8"} Feb 17 18:01:01 crc kubenswrapper[4680]: I0217 18:01:01.196013 4680 generic.go:334] "Generic (PLEG): container finished" podID="e2313798-4510-4dae-a8d1-d4f7c5c08627" 
containerID="bf2175abc2d873532fae1e30ec8eccf35d37d5befb6053e66e68765be1f04080" exitCode=0 Feb 17 18:01:01 crc kubenswrapper[4680]: I0217 18:01:01.196228 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-gbc6h" event={"ID":"e2313798-4510-4dae-a8d1-d4f7c5c08627","Type":"ContainerDied","Data":"bf2175abc2d873532fae1e30ec8eccf35d37d5befb6053e66e68765be1f04080"} Feb 17 18:01:01 crc kubenswrapper[4680]: I0217 18:01:01.198634 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-grk2z" event={"ID":"8f069d89-4464-4526-ba2c-5d4a9de13e02","Type":"ContainerStarted","Data":"6406119e60e42acf84b71d4f7b96f94136f91a1adc70aca66caec8dc67193b5d"} Feb 17 18:01:01 crc kubenswrapper[4680]: E0217 18:01:01.199530 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="6b425c86-a83e-4258-8432-83308132d038" Feb 17 18:01:01 crc kubenswrapper[4680]: I0217 18:01:01.220801 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-7dz9r" podStartSLOduration=3.230724647 podStartE2EDuration="32.220775029s" podCreationTimestamp="2026-02-17 18:00:29 +0000 UTC" firstStartedPulling="2026-02-17 18:00:30.423376954 +0000 UTC m=+959.974275633" lastFinishedPulling="2026-02-17 18:00:59.413427336 +0000 UTC m=+988.964326015" observedRunningTime="2026-02-17 18:01:01.212478748 +0000 UTC m=+990.763377427" watchObservedRunningTime="2026-02-17 18:01:01.220775029 +0000 UTC m=+990.771673708" Feb 17 18:01:02 crc kubenswrapper[4680]: I0217 18:01:02.207976 4680 generic.go:334] "Generic (PLEG): container finished" podID="8f069d89-4464-4526-ba2c-5d4a9de13e02" containerID="6406119e60e42acf84b71d4f7b96f94136f91a1adc70aca66caec8dc67193b5d" exitCode=0 Feb 17 18:01:02 crc kubenswrapper[4680]: I0217 18:01:02.208060 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-grk2z" event={"ID":"8f069d89-4464-4526-ba2c-5d4a9de13e02","Type":"ContainerDied","Data":"6406119e60e42acf84b71d4f7b96f94136f91a1adc70aca66caec8dc67193b5d"} Feb 17 18:01:03 crc kubenswrapper[4680]: I0217 18:01:03.224447 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-gbc6h" event={"ID":"e2313798-4510-4dae-a8d1-d4f7c5c08627","Type":"ContainerStarted","Data":"3050b61eeb673ac5a7ba39eab6b9217502f53cb7e809bf5bd108a588ac24c9ed"} Feb 17 18:01:03 crc kubenswrapper[4680]: I0217 18:01:03.225881 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-gbc6h" Feb 17 18:01:03 crc kubenswrapper[4680]: I0217 18:01:03.228060 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-grk2z" event={"ID":"8f069d89-4464-4526-ba2c-5d4a9de13e02","Type":"ContainerStarted","Data":"f0476fb8064a1c795d9d8d4f9580bea748b2c6015893a07bc07cc03171b2cc72"} Feb 17 18:01:03 crc kubenswrapper[4680]: I0217 18:01:03.228093 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-grk2z" event={"ID":"8f069d89-4464-4526-ba2c-5d4a9de13e02","Type":"ContainerStarted","Data":"44d29946767ef19fc5d0fe84ab639582c8f0daeaddabe3f1b5ab9e0856f99fef"} Feb 17 18:01:03 crc kubenswrapper[4680]: I0217 18:01:03.229259 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ovn-controller-ovs-grk2z" Feb 17 18:01:03 crc kubenswrapper[4680]: I0217 18:01:03.229390 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-grk2z" Feb 17 18:01:03 crc kubenswrapper[4680]: I0217 18:01:03.233017 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-bhpg9" event={"ID":"d3792713-546a-40e9-84a5-9b8c4b0bd4ba","Type":"ContainerStarted","Data":"ec066d55e37d7ebaa0279d20790e17d957d281206bfeb2c923c739fe708538b3"} Feb 17 18:01:03 crc kubenswrapper[4680]: I0217 18:01:03.233682 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-bhpg9" Feb 17 18:01:03 crc kubenswrapper[4680]: I0217 18:01:03.235015 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"81eeaf28-d5ac-44e3-bad2-215fc89f4895","Type":"ContainerStarted","Data":"a92f4b4af5c0b60983ad5e0d0a6a5a254d4b153918b5ee06eb42a9e9354726b2"} Feb 17 18:01:03 crc kubenswrapper[4680]: I0217 18:01:03.237232 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-tw5w5" event={"ID":"42bb122a-c2c2-4a70-a745-1a6bb9334698","Type":"ContainerStarted","Data":"ad052ed092fc387eb8e5e6ad7b8573b8c9d29097dfb8fee45ddde17ef6731906"} Feb 17 18:01:03 crc kubenswrapper[4680]: I0217 18:01:03.239462 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6ba0d348-8721-4d4f-9904-994b1ae56423","Type":"ContainerStarted","Data":"2fc4c7f93e06c50f12331594d4c456976fd948f49c3a5f5999ba800e6053b8e4"} Feb 17 18:01:03 crc kubenswrapper[4680]: I0217 18:01:03.248128 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-gbc6h" podStartSLOduration=9.782946098 podStartE2EDuration="10.248110595s" podCreationTimestamp="2026-02-17 18:00:53 +0000 UTC" firstStartedPulling="2026-02-17 18:00:59.933309529 +0000 UTC m=+989.484208208" lastFinishedPulling="2026-02-17 18:01:00.398474016 +0000 UTC m=+989.949372705" observedRunningTime="2026-02-17 18:01:03.24508059 +0000 UTC m=+992.795979259" watchObservedRunningTime="2026-02-17 18:01:03.248110595 +0000 UTC m=+992.799009274" Feb 17 18:01:03 crc kubenswrapper[4680]: I0217 18:01:03.248942 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e083ffca-d900-4b1a-bc3a-eacae2a81902","Type":"ContainerStarted","Data":"78c247dcf26aedf1cfe3ab87c9a869469f525af651ee27e8d3c342f545f40051"} Feb 17 18:01:03 crc kubenswrapper[4680]: I0217 18:01:03.272374 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-grk2z" podStartSLOduration=12.787206302 podStartE2EDuration="34.272304765s" podCreationTimestamp="2026-02-17 18:00:29 +0000 UTC" firstStartedPulling="2026-02-17 18:00:37.724121075 +0000 UTC m=+967.275019754" lastFinishedPulling="2026-02-17 18:00:59.209219538 +0000 UTC m=+988.760118217" observedRunningTime="2026-02-17 18:01:03.267149722 +0000 UTC m=+992.818048421" watchObservedRunningTime="2026-02-17 18:01:03.272304765 +0000 UTC m=+992.823203444" Feb 17 18:01:03 crc kubenswrapper[4680]: I0217 18:01:03.314148 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 17 18:01:03 crc kubenswrapper[4680]: I0217 18:01:03.319875 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-bhpg9" 
podStartSLOduration=10.706834591 podStartE2EDuration="11.319857327s" podCreationTimestamp="2026-02-17 18:00:52 +0000 UTC" firstStartedPulling="2026-02-17 18:00:59.765765672 +0000 UTC m=+989.316664351" lastFinishedPulling="2026-02-17 18:01:00.378788408 +0000 UTC m=+989.929687087" observedRunningTime="2026-02-17 18:01:03.310527244 +0000 UTC m=+992.861425933" watchObservedRunningTime="2026-02-17 18:01:03.319857327 +0000 UTC m=+992.870756006" Feb 17 18:01:03 crc kubenswrapper[4680]: I0217 18:01:03.345299 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=10.988628046 podStartE2EDuration="35.345264164s" podCreationTimestamp="2026-02-17 18:00:28 +0000 UTC" firstStartedPulling="2026-02-17 18:00:37.740291773 +0000 UTC m=+967.291190452" lastFinishedPulling="2026-02-17 18:01:02.096927891 +0000 UTC m=+991.647826570" observedRunningTime="2026-02-17 18:01:03.344266942 +0000 UTC m=+992.895165651" watchObservedRunningTime="2026-02-17 18:01:03.345264164 +0000 UTC m=+992.896162843" Feb 17 18:01:03 crc kubenswrapper[4680]: I0217 18:01:03.370130 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-tw5w5" podStartSLOduration=9.049329239 podStartE2EDuration="11.370095603s" podCreationTimestamp="2026-02-17 18:00:52 +0000 UTC" firstStartedPulling="2026-02-17 18:00:59.763283984 +0000 UTC m=+989.314182663" lastFinishedPulling="2026-02-17 18:01:02.084050348 +0000 UTC m=+991.634949027" observedRunningTime="2026-02-17 18:01:03.36171194 +0000 UTC m=+992.912610619" watchObservedRunningTime="2026-02-17 18:01:03.370095603 +0000 UTC m=+992.920994282" Feb 17 18:01:03 crc kubenswrapper[4680]: I0217 18:01:03.378081 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 17 18:01:03 crc kubenswrapper[4680]: I0217 18:01:03.391615 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=8.042761423 podStartE2EDuration="32.391588347s" podCreationTimestamp="2026-02-17 18:00:31 +0000 UTC" firstStartedPulling="2026-02-17 18:00:37.740241331 +0000 UTC m=+967.291140010" lastFinishedPulling="2026-02-17 18:01:02.089068265 +0000 UTC m=+991.639966934" observedRunningTime="2026-02-17 18:01:03.387809148 +0000 UTC m=+992.938707847" watchObservedRunningTime="2026-02-17 18:01:03.391588347 +0000 UTC m=+992.942487026" Feb 17 18:01:04 crc kubenswrapper[4680]: I0217 18:01:04.256233 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 17 18:01:05 crc kubenswrapper[4680]: I0217 18:01:05.264155 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8d9f1e89-7561-458e-9189-73106b757079","Type":"ContainerStarted","Data":"bbec7f40f1f0a58d80579676478398525fbc7efbaaf6ef30a0bd41d67e9482c8"} Feb 17 18:01:05 crc kubenswrapper[4680]: I0217 18:01:05.311445 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.164095 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.200286 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.271662 4680 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.306927 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.499177 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.500754 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.503399 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.503623 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.506292 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.506733 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-zxx8p" Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.508811 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.615015 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffab513d-4164-4043-a65b-b72c0b1a67a4-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ffab513d-4164-4043-a65b-b72c0b1a67a4\") " pod="openstack/ovn-northd-0" Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.615123 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ffab513d-4164-4043-a65b-b72c0b1a67a4-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ffab513d-4164-4043-a65b-b72c0b1a67a4\") " pod="openstack/ovn-northd-0" Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.615309 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffab513d-4164-4043-a65b-b72c0b1a67a4-config\") pod \"ovn-northd-0\" (UID: \"ffab513d-4164-4043-a65b-b72c0b1a67a4\") " pod="openstack/ovn-northd-0" Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.615344 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffab513d-4164-4043-a65b-b72c0b1a67a4-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ffab513d-4164-4043-a65b-b72c0b1a67a4\") " pod="openstack/ovn-northd-0" Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.615380 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffab513d-4164-4043-a65b-b72c0b1a67a4-scripts\") pod \"ovn-northd-0\" (UID: \"ffab513d-4164-4043-a65b-b72c0b1a67a4\") " pod="openstack/ovn-northd-0" Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.615422 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffab513d-4164-4043-a65b-b72c0b1a67a4-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"ffab513d-4164-4043-a65b-b72c0b1a67a4\") " pod="openstack/ovn-northd-0" Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.615448 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvpqp\" (UniqueName: \"kubernetes.io/projected/ffab513d-4164-4043-a65b-b72c0b1a67a4-kube-api-access-nvpqp\") pod \"ovn-northd-0\" (UID: \"ffab513d-4164-4043-a65b-b72c0b1a67a4\") " pod="openstack/ovn-northd-0" Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.717252 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffab513d-4164-4043-a65b-b72c0b1a67a4-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ffab513d-4164-4043-a65b-b72c0b1a67a4\") " pod="openstack/ovn-northd-0" Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.717655 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ffab513d-4164-4043-a65b-b72c0b1a67a4-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ffab513d-4164-4043-a65b-b72c0b1a67a4\") " pod="openstack/ovn-northd-0" Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.717680 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffab513d-4164-4043-a65b-b72c0b1a67a4-config\") pod \"ovn-northd-0\" (UID: \"ffab513d-4164-4043-a65b-b72c0b1a67a4\") " pod="openstack/ovn-northd-0" Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.717697 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffab513d-4164-4043-a65b-b72c0b1a67a4-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ffab513d-4164-4043-a65b-b72c0b1a67a4\") " pod="openstack/ovn-northd-0" Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.717723 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffab513d-4164-4043-a65b-b72c0b1a67a4-scripts\") pod \"ovn-northd-0\" (UID: \"ffab513d-4164-4043-a65b-b72c0b1a67a4\") " pod="openstack/ovn-northd-0" Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.717753 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffab513d-4164-4043-a65b-b72c0b1a67a4-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ffab513d-4164-4043-a65b-b72c0b1a67a4\") " pod="openstack/ovn-northd-0" Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.717776 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvpqp\" (UniqueName: \"kubernetes.io/projected/ffab513d-4164-4043-a65b-b72c0b1a67a4-kube-api-access-nvpqp\") pod \"ovn-northd-0\" (UID: \"ffab513d-4164-4043-a65b-b72c0b1a67a4\") " pod="openstack/ovn-northd-0" Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.718493 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ffab513d-4164-4043-a65b-b72c0b1a67a4-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ffab513d-4164-4043-a65b-b72c0b1a67a4\") " pod="openstack/ovn-northd-0" Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.719141 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffab513d-4164-4043-a65b-b72c0b1a67a4-scripts\") pod 
\"ovn-northd-0\" (UID: \"ffab513d-4164-4043-a65b-b72c0b1a67a4\") " pod="openstack/ovn-northd-0" Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.720067 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffab513d-4164-4043-a65b-b72c0b1a67a4-config\") pod \"ovn-northd-0\" (UID: \"ffab513d-4164-4043-a65b-b72c0b1a67a4\") " pod="openstack/ovn-northd-0" Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.725116 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffab513d-4164-4043-a65b-b72c0b1a67a4-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ffab513d-4164-4043-a65b-b72c0b1a67a4\") " pod="openstack/ovn-northd-0" Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.731925 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffab513d-4164-4043-a65b-b72c0b1a67a4-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ffab513d-4164-4043-a65b-b72c0b1a67a4\") " pod="openstack/ovn-northd-0" Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.736362 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvpqp\" (UniqueName: \"kubernetes.io/projected/ffab513d-4164-4043-a65b-b72c0b1a67a4-kube-api-access-nvpqp\") pod \"ovn-northd-0\" (UID: \"ffab513d-4164-4043-a65b-b72c0b1a67a4\") " pod="openstack/ovn-northd-0" Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.736716 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffab513d-4164-4043-a65b-b72c0b1a67a4-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ffab513d-4164-4043-a65b-b72c0b1a67a4\") " pod="openstack/ovn-northd-0" Feb 17 18:01:06 crc kubenswrapper[4680]: I0217 18:01:06.825156 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 17 18:01:07 crc kubenswrapper[4680]: I0217 18:01:07.259207 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 17 18:01:07 crc kubenswrapper[4680]: W0217 18:01:07.269521 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffab513d_4164_4043_a65b_b72c0b1a67a4.slice/crio-97c46128eacb1b2e53801a185da11cde1a47aa0d5ca1b96df57501bdb740ec72 WatchSource:0}: Error finding container 97c46128eacb1b2e53801a185da11cde1a47aa0d5ca1b96df57501bdb740ec72: Status 404 returned error can't find the container with id 97c46128eacb1b2e53801a185da11cde1a47aa0d5ca1b96df57501bdb740ec72 Feb 17 18:01:07 crc kubenswrapper[4680]: I0217 18:01:07.290377 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"391eec5a-7bea-47ed-963a-8152e1db2357","Type":"ContainerStarted","Data":"565ca652e6e5fd067ec2d8a318ad05d2273436d74edc83a977b2e7fba99c689b"} Feb 17 18:01:07 crc kubenswrapper[4680]: I0217 18:01:07.309073 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.469544918 podStartE2EDuration="44.309051884s" podCreationTimestamp="2026-02-17 18:00:23 +0000 UTC" firstStartedPulling="2026-02-17 18:00:24.738409356 +0000 UTC m=+954.289308035" lastFinishedPulling="2026-02-17 18:01:06.577916322 +0000 UTC m=+996.128815001" observedRunningTime="2026-02-17 18:01:07.304615515 +0000 UTC m=+996.855514194" watchObservedRunningTime="2026-02-17 18:01:07.309051884 +0000 UTC m=+996.859950563" Feb 17 18:01:08 crc kubenswrapper[4680]: I0217 18:01:08.299537 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ffab513d-4164-4043-a65b-b72c0b1a67a4","Type":"ContainerStarted","Data":"97c46128eacb1b2e53801a185da11cde1a47aa0d5ca1b96df57501bdb740ec72"} Feb 17 18:01:08 crc kubenswrapper[4680]: I0217 18:01:08.317808 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-bhpg9" Feb 17 18:01:08 crc kubenswrapper[4680]: I0217 18:01:08.645701 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 17 18:01:08 crc kubenswrapper[4680]: I0217 18:01:08.668452 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-gbc6h" Feb 17 18:01:08 crc kubenswrapper[4680]: I0217 18:01:08.712577 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-bhpg9"] Feb 17 18:01:09 crc kubenswrapper[4680]: I0217 18:01:09.306949 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ffab513d-4164-4043-a65b-b72c0b1a67a4","Type":"ContainerStarted","Data":"a611e3b6f417f2bf47fa53af7efbd5953e53609e08f356619953551a208b87ca"} Feb 17 18:01:09 crc kubenswrapper[4680]: I0217 18:01:09.307238 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ffab513d-4164-4043-a65b-b72c0b1a67a4","Type":"ContainerStarted","Data":"6f3e8b9db23bd92f9a76733253d3bb7e8a7a9d6fdaf04cef3b898f82524357df"} Feb 17 18:01:09 crc kubenswrapper[4680]: I0217 18:01:09.307255 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 17 18:01:09 crc kubenswrapper[4680]: I0217 18:01:09.310087 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"92bcbcf3-8caf-4493-99ac-478e3d55a1c1","Type":"ContainerStarted","Data":"9e50fad7237ac3ede25739fb154b323a1c841dbf3a17d84e30f0cda3d9edf31f"} Feb 17 18:01:09 crc kubenswrapper[4680]: I0217 18:01:09.310187 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-bhpg9" podUID="d3792713-546a-40e9-84a5-9b8c4b0bd4ba" containerName="dnsmasq-dns" containerID="cri-o://ec066d55e37d7ebaa0279d20790e17d957d281206bfeb2c923c739fe708538b3" gracePeriod=10 Feb 17 18:01:09 crc kubenswrapper[4680]: I0217 18:01:09.334391 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.470806488 podStartE2EDuration="3.334367806s" podCreationTimestamp="2026-02-17 18:01:06 +0000 UTC" firstStartedPulling="2026-02-17 18:01:07.277427562 +0000 UTC m=+996.828326251" lastFinishedPulling="2026-02-17 18:01:08.14098889 +0000 UTC m=+997.691887569" observedRunningTime="2026-02-17 18:01:09.32743928 +0000 UTC m=+998.878337979" watchObservedRunningTime="2026-02-17 18:01:09.334367806 +0000 UTC m=+998.885266475" Feb 17 18:01:09 crc kubenswrapper[4680]: I0217 18:01:09.895417 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-bhpg9" Feb 17 18:01:09 crc kubenswrapper[4680]: I0217 18:01:09.976996 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3792713-546a-40e9-84a5-9b8c4b0bd4ba-dns-svc\") pod \"d3792713-546a-40e9-84a5-9b8c4b0bd4ba\" (UID: \"d3792713-546a-40e9-84a5-9b8c4b0bd4ba\") " Feb 17 18:01:09 crc kubenswrapper[4680]: I0217 18:01:09.977086 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3792713-546a-40e9-84a5-9b8c4b0bd4ba-ovsdbserver-nb\") pod \"d3792713-546a-40e9-84a5-9b8c4b0bd4ba\" (UID: \"d3792713-546a-40e9-84a5-9b8c4b0bd4ba\") " Feb 17 18:01:09 crc kubenswrapper[4680]: I0217 18:01:09.977265 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3792713-546a-40e9-84a5-9b8c4b0bd4ba-config\") pod \"d3792713-546a-40e9-84a5-9b8c4b0bd4ba\" (UID: \"d3792713-546a-40e9-84a5-9b8c4b0bd4ba\") " Feb 17 18:01:09 crc kubenswrapper[4680]: I0217 18:01:09.977429 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r94cn\" (UniqueName: \"kubernetes.io/projected/d3792713-546a-40e9-84a5-9b8c4b0bd4ba-kube-api-access-r94cn\") pod \"d3792713-546a-40e9-84a5-9b8c4b0bd4ba\" (UID: \"d3792713-546a-40e9-84a5-9b8c4b0bd4ba\") " Feb 17 18:01:09 crc kubenswrapper[4680]: I0217 18:01:09.993778 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3792713-546a-40e9-84a5-9b8c4b0bd4ba-kube-api-access-r94cn" (OuterVolumeSpecName: "kube-api-access-r94cn") pod "d3792713-546a-40e9-84a5-9b8c4b0bd4ba" (UID: "d3792713-546a-40e9-84a5-9b8c4b0bd4ba"). InnerVolumeSpecName "kube-api-access-r94cn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:01:10 crc kubenswrapper[4680]: I0217 18:01:10.023382 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3792713-546a-40e9-84a5-9b8c4b0bd4ba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d3792713-546a-40e9-84a5-9b8c4b0bd4ba" (UID: "d3792713-546a-40e9-84a5-9b8c4b0bd4ba"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:01:10 crc kubenswrapper[4680]: I0217 18:01:10.031068 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3792713-546a-40e9-84a5-9b8c4b0bd4ba-config" (OuterVolumeSpecName: "config") pod "d3792713-546a-40e9-84a5-9b8c4b0bd4ba" (UID: "d3792713-546a-40e9-84a5-9b8c4b0bd4ba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:01:10 crc kubenswrapper[4680]: I0217 18:01:10.040472 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3792713-546a-40e9-84a5-9b8c4b0bd4ba-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d3792713-546a-40e9-84a5-9b8c4b0bd4ba" (UID: "d3792713-546a-40e9-84a5-9b8c4b0bd4ba"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:01:10 crc kubenswrapper[4680]: I0217 18:01:10.079452 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3792713-546a-40e9-84a5-9b8c4b0bd4ba-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:10 crc kubenswrapper[4680]: I0217 18:01:10.079657 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3792713-546a-40e9-84a5-9b8c4b0bd4ba-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:10 crc kubenswrapper[4680]: I0217 18:01:10.079752 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3792713-546a-40e9-84a5-9b8c4b0bd4ba-config\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:10 crc kubenswrapper[4680]: I0217 18:01:10.079859 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r94cn\" (UniqueName: \"kubernetes.io/projected/d3792713-546a-40e9-84a5-9b8c4b0bd4ba-kube-api-access-r94cn\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:10 crc kubenswrapper[4680]: I0217 18:01:10.319992 4680 generic.go:334] "Generic (PLEG): container finished" podID="d3792713-546a-40e9-84a5-9b8c4b0bd4ba" containerID="ec066d55e37d7ebaa0279d20790e17d957d281206bfeb2c923c739fe708538b3" exitCode=0 Feb 17 18:01:10 crc kubenswrapper[4680]: I0217 18:01:10.320141 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-bhpg9" Feb 17 18:01:10 crc kubenswrapper[4680]: I0217 18:01:10.320188 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-bhpg9" event={"ID":"d3792713-546a-40e9-84a5-9b8c4b0bd4ba","Type":"ContainerDied","Data":"ec066d55e37d7ebaa0279d20790e17d957d281206bfeb2c923c739fe708538b3"} Feb 17 18:01:10 crc kubenswrapper[4680]: I0217 18:01:10.320220 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-bhpg9" event={"ID":"d3792713-546a-40e9-84a5-9b8c4b0bd4ba","Type":"ContainerDied","Data":"6682c47ac30f0491c214611a2cc2023e8b75117534a38f6419a603c5ae027dc5"} Feb 17 18:01:10 crc kubenswrapper[4680]: I0217 18:01:10.320239 4680 scope.go:117] "RemoveContainer" containerID="ec066d55e37d7ebaa0279d20790e17d957d281206bfeb2c923c739fe708538b3" Feb 17 18:01:10 crc kubenswrapper[4680]: I0217 18:01:10.353718 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-bhpg9"] Feb 17 18:01:10 crc kubenswrapper[4680]: I0217 18:01:10.362852 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-bhpg9"] Feb 17 18:01:11 crc kubenswrapper[4680]: I0217 18:01:11.115295 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3792713-546a-40e9-84a5-9b8c4b0bd4ba" path="/var/lib/kubelet/pods/d3792713-546a-40e9-84a5-9b8c4b0bd4ba/volumes" Feb 17 18:01:12 crc kubenswrapper[4680]: I0217 18:01:12.134865 4680 scope.go:117] "RemoveContainer" containerID="f31ad2240644510d5a3e2df224f29b361bce7ba8ee8b19f662b668be0f3737b7" Feb 17 18:01:12 crc kubenswrapper[4680]: I0217 18:01:12.155135 4680 scope.go:117] "RemoveContainer" containerID="ec066d55e37d7ebaa0279d20790e17d957d281206bfeb2c923c739fe708538b3" Feb 17 18:01:12 crc kubenswrapper[4680]: E0217 18:01:12.155778 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec066d55e37d7ebaa0279d20790e17d957d281206bfeb2c923c739fe708538b3\": container with ID starting with ec066d55e37d7ebaa0279d20790e17d957d281206bfeb2c923c739fe708538b3 not found: ID does not exist" containerID="ec066d55e37d7ebaa0279d20790e17d957d281206bfeb2c923c739fe708538b3" Feb 17 18:01:12 crc kubenswrapper[4680]: I0217 18:01:12.155817 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec066d55e37d7ebaa0279d20790e17d957d281206bfeb2c923c739fe708538b3"} err="failed to get container status \"ec066d55e37d7ebaa0279d20790e17d957d281206bfeb2c923c739fe708538b3\": rpc error: code = NotFound desc = could not find container \"ec066d55e37d7ebaa0279d20790e17d957d281206bfeb2c923c739fe708538b3\": container with ID starting with ec066d55e37d7ebaa0279d20790e17d957d281206bfeb2c923c739fe708538b3 not found: ID does not exist" Feb 17 18:01:12 crc kubenswrapper[4680]: I0217 18:01:12.155838 4680 scope.go:117] "RemoveContainer" containerID="f31ad2240644510d5a3e2df224f29b361bce7ba8ee8b19f662b668be0f3737b7" Feb 17 18:01:12 crc kubenswrapper[4680]: E0217 18:01:12.156277 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f31ad2240644510d5a3e2df224f29b361bce7ba8ee8b19f662b668be0f3737b7\": container with ID starting with f31ad2240644510d5a3e2df224f29b361bce7ba8ee8b19f662b668be0f3737b7 not found: ID does not exist" containerID="f31ad2240644510d5a3e2df224f29b361bce7ba8ee8b19f662b668be0f3737b7" Feb 17 18:01:12 crc kubenswrapper[4680]: 
I0217 18:01:12.156306 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f31ad2240644510d5a3e2df224f29b361bce7ba8ee8b19f662b668be0f3737b7"} err="failed to get container status \"f31ad2240644510d5a3e2df224f29b361bce7ba8ee8b19f662b668be0f3737b7\": rpc error: code = NotFound desc = could not find container \"f31ad2240644510d5a3e2df224f29b361bce7ba8ee8b19f662b668be0f3737b7\": container with ID starting with f31ad2240644510d5a3e2df224f29b361bce7ba8ee8b19f662b668be0f3737b7 not found: ID does not exist" Feb 17 18:01:13 crc kubenswrapper[4680]: I0217 18:01:13.343199 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"408a63fd-0033-4bf6-925a-cc82a63b2a60","Type":"ContainerStarted","Data":"6642f0686145be26db66d5e48c5955f937a14892b12592b0928da8256b9b97ca"} Feb 17 18:01:13 crc kubenswrapper[4680]: I0217 18:01:13.645748 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 17 18:01:14 crc kubenswrapper[4680]: I0217 18:01:14.353265 4680 generic.go:334] "Generic (PLEG): container finished" podID="92bcbcf3-8caf-4493-99ac-478e3d55a1c1" containerID="9e50fad7237ac3ede25739fb154b323a1c841dbf3a17d84e30f0cda3d9edf31f" exitCode=0 Feb 17 18:01:14 crc kubenswrapper[4680]: I0217 18:01:14.353578 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"92bcbcf3-8caf-4493-99ac-478e3d55a1c1","Type":"ContainerDied","Data":"9e50fad7237ac3ede25739fb154b323a1c841dbf3a17d84e30f0cda3d9edf31f"} Feb 17 18:01:15 crc kubenswrapper[4680]: I0217 18:01:15.361738 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"92bcbcf3-8caf-4493-99ac-478e3d55a1c1","Type":"ContainerStarted","Data":"13a2c203dfa15677829788538806d91519edce95169c2540645ad9c07a2f749d"} Feb 17 18:01:15 crc kubenswrapper[4680]: I0217 18:01:15.388102 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=9.05448653 podStartE2EDuration="55.388081487s" podCreationTimestamp="2026-02-17 18:00:20 +0000 UTC" firstStartedPulling="2026-02-17 18:00:22.483699665 +0000 UTC m=+952.034598344" lastFinishedPulling="2026-02-17 18:01:08.817294622 +0000 UTC m=+998.368193301" observedRunningTime="2026-02-17 18:01:15.381941624 +0000 UTC m=+1004.932840303" watchObservedRunningTime="2026-02-17 18:01:15.388081487 +0000 UTC m=+1004.938980186" Feb 17 18:01:15 crc kubenswrapper[4680]: I0217 18:01:15.772895 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-nm76h"] Feb 17 18:01:15 crc kubenswrapper[4680]: E0217 18:01:15.773225 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3792713-546a-40e9-84a5-9b8c4b0bd4ba" containerName="dnsmasq-dns" Feb 17 18:01:15 crc kubenswrapper[4680]: I0217 18:01:15.773252 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3792713-546a-40e9-84a5-9b8c4b0bd4ba" containerName="dnsmasq-dns" Feb 17 18:01:15 crc kubenswrapper[4680]: E0217 18:01:15.773279 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3792713-546a-40e9-84a5-9b8c4b0bd4ba" containerName="init" Feb 17 18:01:15 crc kubenswrapper[4680]: I0217 18:01:15.773288 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3792713-546a-40e9-84a5-9b8c4b0bd4ba" containerName="init" Feb 17 18:01:15 crc kubenswrapper[4680]: I0217 18:01:15.773543 4680 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d3792713-546a-40e9-84a5-9b8c4b0bd4ba" containerName="dnsmasq-dns" Feb 17 18:01:15 crc kubenswrapper[4680]: I0217 18:01:15.774343 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-nm76h" Feb 17 18:01:15 crc kubenswrapper[4680]: I0217 18:01:15.790159 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nm76h"] Feb 17 18:01:15 crc kubenswrapper[4680]: I0217 18:01:15.876214 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/644b1f7b-2a70-42cc-b11e-e1c1862a8ad4-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-nm76h\" (UID: \"644b1f7b-2a70-42cc-b11e-e1c1862a8ad4\") " pod="openstack/dnsmasq-dns-698758b865-nm76h" Feb 17 18:01:15 crc kubenswrapper[4680]: I0217 18:01:15.876559 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/644b1f7b-2a70-42cc-b11e-e1c1862a8ad4-config\") pod \"dnsmasq-dns-698758b865-nm76h\" (UID: \"644b1f7b-2a70-42cc-b11e-e1c1862a8ad4\") " pod="openstack/dnsmasq-dns-698758b865-nm76h" Feb 17 18:01:15 crc kubenswrapper[4680]: I0217 18:01:15.876579 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/644b1f7b-2a70-42cc-b11e-e1c1862a8ad4-dns-svc\") pod \"dnsmasq-dns-698758b865-nm76h\" (UID: \"644b1f7b-2a70-42cc-b11e-e1c1862a8ad4\") " pod="openstack/dnsmasq-dns-698758b865-nm76h" Feb 17 18:01:15 crc kubenswrapper[4680]: I0217 18:01:15.876637 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj6kf\" (UniqueName: \"kubernetes.io/projected/644b1f7b-2a70-42cc-b11e-e1c1862a8ad4-kube-api-access-jj6kf\") pod \"dnsmasq-dns-698758b865-nm76h\" (UID: \"644b1f7b-2a70-42cc-b11e-e1c1862a8ad4\") " pod="openstack/dnsmasq-dns-698758b865-nm76h" Feb 17 18:01:15 crc kubenswrapper[4680]: I0217 18:01:15.876666 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/644b1f7b-2a70-42cc-b11e-e1c1862a8ad4-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-nm76h\" (UID: \"644b1f7b-2a70-42cc-b11e-e1c1862a8ad4\") " pod="openstack/dnsmasq-dns-698758b865-nm76h" Feb 17 18:01:15 crc kubenswrapper[4680]: I0217 18:01:15.978102 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/644b1f7b-2a70-42cc-b11e-e1c1862a8ad4-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-nm76h\" (UID: \"644b1f7b-2a70-42cc-b11e-e1c1862a8ad4\") " pod="openstack/dnsmasq-dns-698758b865-nm76h" Feb 17 18:01:15 crc kubenswrapper[4680]: I0217 18:01:15.978180 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/644b1f7b-2a70-42cc-b11e-e1c1862a8ad4-config\") pod \"dnsmasq-dns-698758b865-nm76h\" (UID: \"644b1f7b-2a70-42cc-b11e-e1c1862a8ad4\") " pod="openstack/dnsmasq-dns-698758b865-nm76h" Feb 17 18:01:15 crc kubenswrapper[4680]: I0217 18:01:15.978198 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/644b1f7b-2a70-42cc-b11e-e1c1862a8ad4-dns-svc\") pod \"dnsmasq-dns-698758b865-nm76h\" (UID: \"644b1f7b-2a70-42cc-b11e-e1c1862a8ad4\") " 
pod="openstack/dnsmasq-dns-698758b865-nm76h" Feb 17 18:01:15 crc kubenswrapper[4680]: I0217 18:01:15.978263 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj6kf\" (UniqueName: \"kubernetes.io/projected/644b1f7b-2a70-42cc-b11e-e1c1862a8ad4-kube-api-access-jj6kf\") pod \"dnsmasq-dns-698758b865-nm76h\" (UID: \"644b1f7b-2a70-42cc-b11e-e1c1862a8ad4\") " pod="openstack/dnsmasq-dns-698758b865-nm76h" Feb 17 18:01:15 crc kubenswrapper[4680]: I0217 18:01:15.978288 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/644b1f7b-2a70-42cc-b11e-e1c1862a8ad4-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-nm76h\" (UID: \"644b1f7b-2a70-42cc-b11e-e1c1862a8ad4\") " pod="openstack/dnsmasq-dns-698758b865-nm76h" Feb 17 18:01:15 crc kubenswrapper[4680]: I0217 18:01:15.979093 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/644b1f7b-2a70-42cc-b11e-e1c1862a8ad4-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-nm76h\" (UID: \"644b1f7b-2a70-42cc-b11e-e1c1862a8ad4\") " pod="openstack/dnsmasq-dns-698758b865-nm76h" Feb 17 18:01:15 crc kubenswrapper[4680]: I0217 18:01:15.979612 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/644b1f7b-2a70-42cc-b11e-e1c1862a8ad4-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-nm76h\" (UID: \"644b1f7b-2a70-42cc-b11e-e1c1862a8ad4\") " pod="openstack/dnsmasq-dns-698758b865-nm76h" Feb 17 18:01:15 crc kubenswrapper[4680]: I0217 18:01:15.980087 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/644b1f7b-2a70-42cc-b11e-e1c1862a8ad4-config\") pod \"dnsmasq-dns-698758b865-nm76h\" (UID: \"644b1f7b-2a70-42cc-b11e-e1c1862a8ad4\") " pod="openstack/dnsmasq-dns-698758b865-nm76h" Feb 17 18:01:15 crc kubenswrapper[4680]: I0217 18:01:15.980612 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/644b1f7b-2a70-42cc-b11e-e1c1862a8ad4-dns-svc\") pod \"dnsmasq-dns-698758b865-nm76h\" (UID: \"644b1f7b-2a70-42cc-b11e-e1c1862a8ad4\") " pod="openstack/dnsmasq-dns-698758b865-nm76h" Feb 17 18:01:16 crc kubenswrapper[4680]: I0217 18:01:16.009333 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj6kf\" (UniqueName: \"kubernetes.io/projected/644b1f7b-2a70-42cc-b11e-e1c1862a8ad4-kube-api-access-jj6kf\") pod \"dnsmasq-dns-698758b865-nm76h\" (UID: \"644b1f7b-2a70-42cc-b11e-e1c1862a8ad4\") " pod="openstack/dnsmasq-dns-698758b865-nm76h" Feb 17 18:01:16 crc kubenswrapper[4680]: I0217 18:01:16.099732 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-nm76h" Feb 17 18:01:16 crc kubenswrapper[4680]: I0217 18:01:16.373104 4680 generic.go:334] "Generic (PLEG): container finished" podID="408a63fd-0033-4bf6-925a-cc82a63b2a60" containerID="6642f0686145be26db66d5e48c5955f937a14892b12592b0928da8256b9b97ca" exitCode=0 Feb 17 18:01:16 crc kubenswrapper[4680]: I0217 18:01:16.373204 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"408a63fd-0033-4bf6-925a-cc82a63b2a60","Type":"ContainerDied","Data":"6642f0686145be26db66d5e48c5955f937a14892b12592b0928da8256b9b97ca"} Feb 17 18:01:16 crc kubenswrapper[4680]: I0217 18:01:16.774265 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nm76h"] Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.018619 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.033950 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.036789 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.037401 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-d25nm" Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.037496 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.037751 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.062453 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.100651 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-etc-swift\") pod \"swift-storage-0\" (UID: \"54c53390-eb8a-4a06-9d50-7d0a4f10d3df\") " pod="openstack/swift-storage-0" Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.102474 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvqr7\" (UniqueName: \"kubernetes.io/projected/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-kube-api-access-qvqr7\") pod \"swift-storage-0\" (UID: \"54c53390-eb8a-4a06-9d50-7d0a4f10d3df\") " pod="openstack/swift-storage-0" Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.102593 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-lock\") pod \"swift-storage-0\" (UID: \"54c53390-eb8a-4a06-9d50-7d0a4f10d3df\") " pod="openstack/swift-storage-0" Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.102791 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-cache\") pod \"swift-storage-0\" (UID: \"54c53390-eb8a-4a06-9d50-7d0a4f10d3df\") " pod="openstack/swift-storage-0" Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.102900 4680 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"54c53390-eb8a-4a06-9d50-7d0a4f10d3df\") " pod="openstack/swift-storage-0" Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.102961 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"54c53390-eb8a-4a06-9d50-7d0a4f10d3df\") " pod="openstack/swift-storage-0" Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.204553 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-lock\") pod \"swift-storage-0\" (UID: \"54c53390-eb8a-4a06-9d50-7d0a4f10d3df\") " pod="openstack/swift-storage-0" Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.204622 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-cache\") pod \"swift-storage-0\" (UID: \"54c53390-eb8a-4a06-9d50-7d0a4f10d3df\") " pod="openstack/swift-storage-0" Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.204653 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"54c53390-eb8a-4a06-9d50-7d0a4f10d3df\") " pod="openstack/swift-storage-0" Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.204669 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"54c53390-eb8a-4a06-9d50-7d0a4f10d3df\") " pod="openstack/swift-storage-0" Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.204750 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-etc-swift\") pod \"swift-storage-0\" (UID: \"54c53390-eb8a-4a06-9d50-7d0a4f10d3df\") " pod="openstack/swift-storage-0" Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.204775 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvqr7\" (UniqueName: \"kubernetes.io/projected/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-kube-api-access-qvqr7\") pod \"swift-storage-0\" (UID: \"54c53390-eb8a-4a06-9d50-7d0a4f10d3df\") " pod="openstack/swift-storage-0" Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.205079 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"54c53390-eb8a-4a06-9d50-7d0a4f10d3df\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/swift-storage-0" Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.205111 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-cache\") pod \"swift-storage-0\" (UID: \"54c53390-eb8a-4a06-9d50-7d0a4f10d3df\") " pod="openstack/swift-storage-0" Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.205090 
4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-lock\") pod \"swift-storage-0\" (UID: \"54c53390-eb8a-4a06-9d50-7d0a4f10d3df\") " pod="openstack/swift-storage-0" Feb 17 18:01:17 crc kubenswrapper[4680]: E0217 18:01:17.205493 4680 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 18:01:17 crc kubenswrapper[4680]: E0217 18:01:17.205517 4680 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 18:01:17 crc kubenswrapper[4680]: E0217 18:01:17.205575 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-etc-swift podName:54c53390-eb8a-4a06-9d50-7d0a4f10d3df nodeName:}" failed. No retries permitted until 2026-02-17 18:01:17.705554977 +0000 UTC m=+1007.256453656 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-etc-swift") pod "swift-storage-0" (UID: "54c53390-eb8a-4a06-9d50-7d0a4f10d3df") : configmap "swift-ring-files" not found Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.210222 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"54c53390-eb8a-4a06-9d50-7d0a4f10d3df\") " pod="openstack/swift-storage-0" Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.228237 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"54c53390-eb8a-4a06-9d50-7d0a4f10d3df\") " pod="openstack/swift-storage-0" Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.229031 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvqr7\" (UniqueName: \"kubernetes.io/projected/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-kube-api-access-qvqr7\") pod \"swift-storage-0\" (UID: \"54c53390-eb8a-4a06-9d50-7d0a4f10d3df\") " pod="openstack/swift-storage-0" Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.381130 4680 generic.go:334] "Generic (PLEG): container finished" podID="644b1f7b-2a70-42cc-b11e-e1c1862a8ad4" containerID="33e4c088df5ba02ae56df5ad2e9a52f187d5ea04cbfd87b48896c6ede700d437" exitCode=0 Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.381190 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nm76h" event={"ID":"644b1f7b-2a70-42cc-b11e-e1c1862a8ad4","Type":"ContainerDied","Data":"33e4c088df5ba02ae56df5ad2e9a52f187d5ea04cbfd87b48896c6ede700d437"} Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.381217 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nm76h" event={"ID":"644b1f7b-2a70-42cc-b11e-e1c1862a8ad4","Type":"ContainerStarted","Data":"412f948aac5506d5dc69c8e28a19e643b44f689470e7d64903a86ba67435f3c2"} Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.394384 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6b425c86-a83e-4258-8432-83308132d038","Type":"ContainerStarted","Data":"c3bcd130e4f051598ae9e1882af878bb4158e0d7273e2021a622635c0f4b98f6"} Feb 17 18:01:17 
crc kubenswrapper[4680]: I0217 18:01:17.395329 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.398713 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"408a63fd-0033-4bf6-925a-cc82a63b2a60","Type":"ContainerStarted","Data":"febb3cb4d6e12d12fc2fe717dca1c9c03017a1439e4eba4df50298cf13efca6d"} Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.441857 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371980.412941 podStartE2EDuration="56.441835371s" podCreationTimestamp="2026-02-17 18:00:21 +0000 UTC" firstStartedPulling="2026-02-17 18:00:23.987462181 +0000 UTC m=+953.538360860" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:01:17.43670323 +0000 UTC m=+1006.987601909" watchObservedRunningTime="2026-02-17 18:01:17.441835371 +0000 UTC m=+1006.992734050" Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.453071 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.927786955 podStartE2EDuration="52.453019732s" podCreationTimestamp="2026-02-17 18:00:25 +0000 UTC" firstStartedPulling="2026-02-17 18:00:27.016348975 +0000 UTC m=+956.567247654" lastFinishedPulling="2026-02-17 18:01:16.541581752 +0000 UTC m=+1006.092480431" observedRunningTime="2026-02-17 18:01:17.452768334 +0000 UTC m=+1007.003667013" watchObservedRunningTime="2026-02-17 18:01:17.453019732 +0000 UTC m=+1007.003918411" Feb 17 18:01:17 crc kubenswrapper[4680]: I0217 18:01:17.713520 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-etc-swift\") pod \"swift-storage-0\" (UID: \"54c53390-eb8a-4a06-9d50-7d0a4f10d3df\") " pod="openstack/swift-storage-0" Feb 17 18:01:17 crc kubenswrapper[4680]: E0217 18:01:17.713759 4680 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 18:01:17 crc kubenswrapper[4680]: E0217 18:01:17.713794 4680 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 18:01:17 crc kubenswrapper[4680]: E0217 18:01:17.713853 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-etc-swift podName:54c53390-eb8a-4a06-9d50-7d0a4f10d3df nodeName:}" failed. No retries permitted until 2026-02-17 18:01:18.713833416 +0000 UTC m=+1008.264732095 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-etc-swift") pod "swift-storage-0" (UID: "54c53390-eb8a-4a06-9d50-7d0a4f10d3df") : configmap "swift-ring-files" not found Feb 17 18:01:18 crc kubenswrapper[4680]: I0217 18:01:18.409619 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nm76h" event={"ID":"644b1f7b-2a70-42cc-b11e-e1c1862a8ad4","Type":"ContainerStarted","Data":"24519869843f5e829b64e086c99cd62d1b0fb0ace031148021a74c4e616b832d"} Feb 17 18:01:18 crc kubenswrapper[4680]: I0217 18:01:18.731247 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-etc-swift\") pod \"swift-storage-0\" (UID: \"54c53390-eb8a-4a06-9d50-7d0a4f10d3df\") " pod="openstack/swift-storage-0" Feb 17 18:01:18 crc kubenswrapper[4680]: E0217 18:01:18.731467 4680 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 18:01:18 crc kubenswrapper[4680]: E0217 18:01:18.731498 4680 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 18:01:18 crc kubenswrapper[4680]: E0217 18:01:18.731561 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-etc-swift podName:54c53390-eb8a-4a06-9d50-7d0a4f10d3df nodeName:}" failed. No retries permitted until 2026-02-17 18:01:20.731538301 +0000 UTC m=+1010.282436980 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-etc-swift") pod "swift-storage-0" (UID: "54c53390-eb8a-4a06-9d50-7d0a4f10d3df") : configmap "swift-ring-files" not found Feb 17 18:01:19 crc kubenswrapper[4680]: I0217 18:01:19.415889 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-nm76h" Feb 17 18:01:20 crc kubenswrapper[4680]: I0217 18:01:20.764492 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-etc-swift\") pod \"swift-storage-0\" (UID: \"54c53390-eb8a-4a06-9d50-7d0a4f10d3df\") " pod="openstack/swift-storage-0" Feb 17 18:01:20 crc kubenswrapper[4680]: E0217 18:01:20.764670 4680 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 18:01:20 crc kubenswrapper[4680]: E0217 18:01:20.764690 4680 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 18:01:20 crc kubenswrapper[4680]: E0217 18:01:20.764741 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-etc-swift podName:54c53390-eb8a-4a06-9d50-7d0a4f10d3df nodeName:}" failed. No retries permitted until 2026-02-17 18:01:24.7647243 +0000 UTC m=+1014.315622979 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-etc-swift") pod "swift-storage-0" (UID: "54c53390-eb8a-4a06-9d50-7d0a4f10d3df") : configmap "swift-ring-files" not found Feb 17 18:01:20 crc kubenswrapper[4680]: I0217 18:01:20.813798 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-nm76h" podStartSLOduration=5.81377343 podStartE2EDuration="5.81377343s" podCreationTimestamp="2026-02-17 18:01:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:01:18.490597661 +0000 UTC m=+1008.041496340" watchObservedRunningTime="2026-02-17 18:01:20.81377343 +0000 UTC m=+1010.364672109" Feb 17 18:01:20 crc kubenswrapper[4680]: I0217 18:01:20.819858 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-b9q8k"] Feb 17 18:01:20 crc kubenswrapper[4680]: I0217 18:01:20.821163 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-b9q8k" Feb 17 18:01:20 crc kubenswrapper[4680]: I0217 18:01:20.823248 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 17 18:01:20 crc kubenswrapper[4680]: I0217 18:01:20.823423 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 17 18:01:20 crc kubenswrapper[4680]: I0217 18:01:20.823594 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 17 18:01:20 crc kubenswrapper[4680]: I0217 18:01:20.834880 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-b9q8k"] Feb 17 18:01:20 crc kubenswrapper[4680]: I0217 18:01:20.866227 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e281ffae-b948-40c0-b8a1-68805d081000-swiftconf\") pod \"swift-ring-rebalance-b9q8k\" (UID: \"e281ffae-b948-40c0-b8a1-68805d081000\") " pod="openstack/swift-ring-rebalance-b9q8k" Feb 17 18:01:20 crc kubenswrapper[4680]: I0217 18:01:20.866426 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e281ffae-b948-40c0-b8a1-68805d081000-ring-data-devices\") pod \"swift-ring-rebalance-b9q8k\" (UID: \"e281ffae-b948-40c0-b8a1-68805d081000\") " pod="openstack/swift-ring-rebalance-b9q8k" Feb 17 18:01:20 crc kubenswrapper[4680]: I0217 18:01:20.866564 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e281ffae-b948-40c0-b8a1-68805d081000-combined-ca-bundle\") pod \"swift-ring-rebalance-b9q8k\" (UID: \"e281ffae-b948-40c0-b8a1-68805d081000\") " pod="openstack/swift-ring-rebalance-b9q8k" Feb 17 18:01:20 crc kubenswrapper[4680]: I0217 18:01:20.866606 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj6pr\" (UniqueName: \"kubernetes.io/projected/e281ffae-b948-40c0-b8a1-68805d081000-kube-api-access-dj6pr\") pod \"swift-ring-rebalance-b9q8k\" (UID: \"e281ffae-b948-40c0-b8a1-68805d081000\") " pod="openstack/swift-ring-rebalance-b9q8k" Feb 17 18:01:20 crc kubenswrapper[4680]: I0217 18:01:20.866643 4680 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e281ffae-b948-40c0-b8a1-68805d081000-scripts\") pod \"swift-ring-rebalance-b9q8k\" (UID: \"e281ffae-b948-40c0-b8a1-68805d081000\") " pod="openstack/swift-ring-rebalance-b9q8k" Feb 17 18:01:20 crc kubenswrapper[4680]: I0217 18:01:20.866675 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e281ffae-b948-40c0-b8a1-68805d081000-etc-swift\") pod \"swift-ring-rebalance-b9q8k\" (UID: \"e281ffae-b948-40c0-b8a1-68805d081000\") " pod="openstack/swift-ring-rebalance-b9q8k" Feb 17 18:01:20 crc kubenswrapper[4680]: I0217 18:01:20.866728 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e281ffae-b948-40c0-b8a1-68805d081000-dispersionconf\") pod \"swift-ring-rebalance-b9q8k\" (UID: \"e281ffae-b948-40c0-b8a1-68805d081000\") " pod="openstack/swift-ring-rebalance-b9q8k" Feb 17 18:01:20 crc kubenswrapper[4680]: I0217 18:01:20.968150 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e281ffae-b948-40c0-b8a1-68805d081000-swiftconf\") pod \"swift-ring-rebalance-b9q8k\" (UID: \"e281ffae-b948-40c0-b8a1-68805d081000\") " pod="openstack/swift-ring-rebalance-b9q8k" Feb 17 18:01:20 crc kubenswrapper[4680]: I0217 18:01:20.968469 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e281ffae-b948-40c0-b8a1-68805d081000-ring-data-devices\") pod \"swift-ring-rebalance-b9q8k\" (UID: \"e281ffae-b948-40c0-b8a1-68805d081000\") " pod="openstack/swift-ring-rebalance-b9q8k" Feb 17 18:01:20 crc kubenswrapper[4680]: I0217 18:01:20.968598 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e281ffae-b948-40c0-b8a1-68805d081000-combined-ca-bundle\") pod \"swift-ring-rebalance-b9q8k\" (UID: \"e281ffae-b948-40c0-b8a1-68805d081000\") " pod="openstack/swift-ring-rebalance-b9q8k" Feb 17 18:01:20 crc kubenswrapper[4680]: I0217 18:01:20.968680 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj6pr\" (UniqueName: \"kubernetes.io/projected/e281ffae-b948-40c0-b8a1-68805d081000-kube-api-access-dj6pr\") pod \"swift-ring-rebalance-b9q8k\" (UID: \"e281ffae-b948-40c0-b8a1-68805d081000\") " pod="openstack/swift-ring-rebalance-b9q8k" Feb 17 18:01:20 crc kubenswrapper[4680]: I0217 18:01:20.968768 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e281ffae-b948-40c0-b8a1-68805d081000-scripts\") pod \"swift-ring-rebalance-b9q8k\" (UID: \"e281ffae-b948-40c0-b8a1-68805d081000\") " pod="openstack/swift-ring-rebalance-b9q8k" Feb 17 18:01:20 crc kubenswrapper[4680]: I0217 18:01:20.968882 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e281ffae-b948-40c0-b8a1-68805d081000-etc-swift\") pod \"swift-ring-rebalance-b9q8k\" (UID: \"e281ffae-b948-40c0-b8a1-68805d081000\") " pod="openstack/swift-ring-rebalance-b9q8k" Feb 17 18:01:20 crc kubenswrapper[4680]: I0217 18:01:20.969242 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" 
(UniqueName: \"kubernetes.io/configmap/e281ffae-b948-40c0-b8a1-68805d081000-ring-data-devices\") pod \"swift-ring-rebalance-b9q8k\" (UID: \"e281ffae-b948-40c0-b8a1-68805d081000\") " pod="openstack/swift-ring-rebalance-b9q8k" Feb 17 18:01:20 crc kubenswrapper[4680]: I0217 18:01:20.969219 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e281ffae-b948-40c0-b8a1-68805d081000-etc-swift\") pod \"swift-ring-rebalance-b9q8k\" (UID: \"e281ffae-b948-40c0-b8a1-68805d081000\") " pod="openstack/swift-ring-rebalance-b9q8k" Feb 17 18:01:20 crc kubenswrapper[4680]: I0217 18:01:20.969428 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e281ffae-b948-40c0-b8a1-68805d081000-dispersionconf\") pod \"swift-ring-rebalance-b9q8k\" (UID: \"e281ffae-b948-40c0-b8a1-68805d081000\") " pod="openstack/swift-ring-rebalance-b9q8k" Feb 17 18:01:20 crc kubenswrapper[4680]: I0217 18:01:20.969483 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e281ffae-b948-40c0-b8a1-68805d081000-scripts\") pod \"swift-ring-rebalance-b9q8k\" (UID: \"e281ffae-b948-40c0-b8a1-68805d081000\") " pod="openstack/swift-ring-rebalance-b9q8k" Feb 17 18:01:20 crc kubenswrapper[4680]: I0217 18:01:20.973647 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e281ffae-b948-40c0-b8a1-68805d081000-swiftconf\") pod \"swift-ring-rebalance-b9q8k\" (UID: \"e281ffae-b948-40c0-b8a1-68805d081000\") " pod="openstack/swift-ring-rebalance-b9q8k" Feb 17 18:01:20 crc kubenswrapper[4680]: I0217 18:01:20.974612 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e281ffae-b948-40c0-b8a1-68805d081000-combined-ca-bundle\") pod \"swift-ring-rebalance-b9q8k\" (UID: \"e281ffae-b948-40c0-b8a1-68805d081000\") " pod="openstack/swift-ring-rebalance-b9q8k" Feb 17 18:01:20 crc kubenswrapper[4680]: I0217 18:01:20.979799 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e281ffae-b948-40c0-b8a1-68805d081000-dispersionconf\") pod \"swift-ring-rebalance-b9q8k\" (UID: \"e281ffae-b948-40c0-b8a1-68805d081000\") " pod="openstack/swift-ring-rebalance-b9q8k" Feb 17 18:01:21 crc kubenswrapper[4680]: I0217 18:01:21.005852 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj6pr\" (UniqueName: \"kubernetes.io/projected/e281ffae-b948-40c0-b8a1-68805d081000-kube-api-access-dj6pr\") pod \"swift-ring-rebalance-b9q8k\" (UID: \"e281ffae-b948-40c0-b8a1-68805d081000\") " pod="openstack/swift-ring-rebalance-b9q8k" Feb 17 18:01:21 crc kubenswrapper[4680]: I0217 18:01:21.144007 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-b9q8k" Feb 17 18:01:21 crc kubenswrapper[4680]: I0217 18:01:21.611228 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-b9q8k"] Feb 17 18:01:21 crc kubenswrapper[4680]: I0217 18:01:21.682930 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 17 18:01:21 crc kubenswrapper[4680]: I0217 18:01:21.682990 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 17 18:01:21 crc kubenswrapper[4680]: I0217 18:01:21.747018 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 17 18:01:22 crc kubenswrapper[4680]: I0217 18:01:22.438566 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-b9q8k" event={"ID":"e281ffae-b948-40c0-b8a1-68805d081000","Type":"ContainerStarted","Data":"8efc7a323002c580bf788726e365ea9aa4e0b3cd7f4e09a7ac6c94b06b849817"} Feb 17 18:01:22 crc kubenswrapper[4680]: I0217 18:01:22.510008 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 17 18:01:23 crc kubenswrapper[4680]: I0217 18:01:23.195838 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 17 18:01:23 crc kubenswrapper[4680]: I0217 18:01:23.196196 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 17 18:01:23 crc kubenswrapper[4680]: I0217 18:01:23.282591 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 17 18:01:23 crc kubenswrapper[4680]: I0217 18:01:23.522734 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 17 18:01:23 crc kubenswrapper[4680]: I0217 18:01:23.884687 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-fdc1-account-create-update-zp4v9"] Feb 17 18:01:23 crc kubenswrapper[4680]: I0217 18:01:23.885756 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-fdc1-account-create-update-zp4v9" Feb 17 18:01:23 crc kubenswrapper[4680]: I0217 18:01:23.892981 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-fdc1-account-create-update-zp4v9"] Feb 17 18:01:23 crc kubenswrapper[4680]: I0217 18:01:23.895474 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 17 18:01:23 crc kubenswrapper[4680]: I0217 18:01:23.941989 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-qjgvr"] Feb 17 18:01:23 crc kubenswrapper[4680]: I0217 18:01:23.943416 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-qjgvr" Feb 17 18:01:23 crc kubenswrapper[4680]: I0217 18:01:23.948957 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-qjgvr"] Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.023155 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24pw5\" (UniqueName: \"kubernetes.io/projected/9da1b635-d49b-4f80-9e40-a70d5f899087-kube-api-access-24pw5\") pod \"glance-fdc1-account-create-update-zp4v9\" (UID: \"9da1b635-d49b-4f80-9e40-a70d5f899087\") " pod="openstack/glance-fdc1-account-create-update-zp4v9" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.023347 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9da1b635-d49b-4f80-9e40-a70d5f899087-operator-scripts\") pod \"glance-fdc1-account-create-update-zp4v9\" (UID: \"9da1b635-d49b-4f80-9e40-a70d5f899087\") " pod="openstack/glance-fdc1-account-create-update-zp4v9" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.124814 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e869c352-adf6-4ae6-9a3e-9d35179d50a0-operator-scripts\") pod \"glance-db-create-qjgvr\" (UID: \"e869c352-adf6-4ae6-9a3e-9d35179d50a0\") " pod="openstack/glance-db-create-qjgvr" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.125332 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9da1b635-d49b-4f80-9e40-a70d5f899087-operator-scripts\") pod \"glance-fdc1-account-create-update-zp4v9\" (UID: \"9da1b635-d49b-4f80-9e40-a70d5f899087\") " pod="openstack/glance-fdc1-account-create-update-zp4v9" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.125533 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8z29\" (UniqueName: \"kubernetes.io/projected/e869c352-adf6-4ae6-9a3e-9d35179d50a0-kube-api-access-v8z29\") pod \"glance-db-create-qjgvr\" (UID: \"e869c352-adf6-4ae6-9a3e-9d35179d50a0\") " pod="openstack/glance-db-create-qjgvr" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.125638 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24pw5\" (UniqueName: \"kubernetes.io/projected/9da1b635-d49b-4f80-9e40-a70d5f899087-kube-api-access-24pw5\") pod \"glance-fdc1-account-create-update-zp4v9\" (UID: \"9da1b635-d49b-4f80-9e40-a70d5f899087\") " pod="openstack/glance-fdc1-account-create-update-zp4v9" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.126558 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9da1b635-d49b-4f80-9e40-a70d5f899087-operator-scripts\") pod \"glance-fdc1-account-create-update-zp4v9\" (UID: \"9da1b635-d49b-4f80-9e40-a70d5f899087\") " pod="openstack/glance-fdc1-account-create-update-zp4v9" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.147800 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24pw5\" (UniqueName: \"kubernetes.io/projected/9da1b635-d49b-4f80-9e40-a70d5f899087-kube-api-access-24pw5\") pod \"glance-fdc1-account-create-update-zp4v9\" (UID: \"9da1b635-d49b-4f80-9e40-a70d5f899087\") " 
pod="openstack/glance-fdc1-account-create-update-zp4v9" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.200688 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-gh6l2"] Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.202947 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-gh6l2" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.208844 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-gh6l2"] Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.209117 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-fdc1-account-create-update-zp4v9" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.239103 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8z29\" (UniqueName: \"kubernetes.io/projected/e869c352-adf6-4ae6-9a3e-9d35179d50a0-kube-api-access-v8z29\") pod \"glance-db-create-qjgvr\" (UID: \"e869c352-adf6-4ae6-9a3e-9d35179d50a0\") " pod="openstack/glance-db-create-qjgvr" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.239243 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e869c352-adf6-4ae6-9a3e-9d35179d50a0-operator-scripts\") pod \"glance-db-create-qjgvr\" (UID: \"e869c352-adf6-4ae6-9a3e-9d35179d50a0\") " pod="openstack/glance-db-create-qjgvr" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.241014 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e869c352-adf6-4ae6-9a3e-9d35179d50a0-operator-scripts\") pod \"glance-db-create-qjgvr\" (UID: \"e869c352-adf6-4ae6-9a3e-9d35179d50a0\") " pod="openstack/glance-db-create-qjgvr" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.264800 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8z29\" (UniqueName: \"kubernetes.io/projected/e869c352-adf6-4ae6-9a3e-9d35179d50a0-kube-api-access-v8z29\") pod \"glance-db-create-qjgvr\" (UID: \"e869c352-adf6-4ae6-9a3e-9d35179d50a0\") " pod="openstack/glance-db-create-qjgvr" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.268785 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-qjgvr" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.308877 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-3cf6-account-create-update-rhwp9"] Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.310126 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3cf6-account-create-update-rhwp9" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.318922 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.332693 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3cf6-account-create-update-rhwp9"] Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.340616 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eddac82b-15c0-4bb8-96e4-9503ac536137-operator-scripts\") pod \"keystone-db-create-gh6l2\" (UID: \"eddac82b-15c0-4bb8-96e4-9503ac536137\") " pod="openstack/keystone-db-create-gh6l2" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.340865 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqk2h\" (UniqueName: \"kubernetes.io/projected/eddac82b-15c0-4bb8-96e4-9503ac536137-kube-api-access-xqk2h\") pod \"keystone-db-create-gh6l2\" (UID: \"eddac82b-15c0-4bb8-96e4-9503ac536137\") " pod="openstack/keystone-db-create-gh6l2" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.412045 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-lrlzd"] Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.413232 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-lrlzd" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.420354 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-lrlzd"] Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.461520 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbtm5\" (UniqueName: \"kubernetes.io/projected/53ece1f9-7f88-4f68-a37d-19436d21105c-kube-api-access-wbtm5\") pod \"placement-db-create-lrlzd\" (UID: \"53ece1f9-7f88-4f68-a37d-19436d21105c\") " pod="openstack/placement-db-create-lrlzd" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.467186 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b62ed95-9eed-4e65-979a-a2d24fd25ef7-operator-scripts\") pod \"keystone-3cf6-account-create-update-rhwp9\" (UID: \"4b62ed95-9eed-4e65-979a-a2d24fd25ef7\") " pod="openstack/keystone-3cf6-account-create-update-rhwp9" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.467439 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqk2h\" (UniqueName: \"kubernetes.io/projected/eddac82b-15c0-4bb8-96e4-9503ac536137-kube-api-access-xqk2h\") pod \"keystone-db-create-gh6l2\" (UID: \"eddac82b-15c0-4bb8-96e4-9503ac536137\") " pod="openstack/keystone-db-create-gh6l2" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.467553 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcswg\" (UniqueName: \"kubernetes.io/projected/4b62ed95-9eed-4e65-979a-a2d24fd25ef7-kube-api-access-zcswg\") pod \"keystone-3cf6-account-create-update-rhwp9\" (UID: \"4b62ed95-9eed-4e65-979a-a2d24fd25ef7\") " pod="openstack/keystone-3cf6-account-create-update-rhwp9" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.467659 4680 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eddac82b-15c0-4bb8-96e4-9503ac536137-operator-scripts\") pod \"keystone-db-create-gh6l2\" (UID: \"eddac82b-15c0-4bb8-96e4-9503ac536137\") " pod="openstack/keystone-db-create-gh6l2" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.467713 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53ece1f9-7f88-4f68-a37d-19436d21105c-operator-scripts\") pod \"placement-db-create-lrlzd\" (UID: \"53ece1f9-7f88-4f68-a37d-19436d21105c\") " pod="openstack/placement-db-create-lrlzd" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.469016 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eddac82b-15c0-4bb8-96e4-9503ac536137-operator-scripts\") pod \"keystone-db-create-gh6l2\" (UID: \"eddac82b-15c0-4bb8-96e4-9503ac536137\") " pod="openstack/keystone-db-create-gh6l2" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.522261 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqk2h\" (UniqueName: \"kubernetes.io/projected/eddac82b-15c0-4bb8-96e4-9503ac536137-kube-api-access-xqk2h\") pod \"keystone-db-create-gh6l2\" (UID: \"eddac82b-15c0-4bb8-96e4-9503ac536137\") " pod="openstack/keystone-db-create-gh6l2" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.526469 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-dbbf-account-create-update-v8ksh"] Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.528281 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-dbbf-account-create-update-v8ksh" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.534859 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.542870 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-dbbf-account-create-update-v8ksh"] Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.563538 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-gh6l2" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.570092 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcswg\" (UniqueName: \"kubernetes.io/projected/4b62ed95-9eed-4e65-979a-a2d24fd25ef7-kube-api-access-zcswg\") pod \"keystone-3cf6-account-create-update-rhwp9\" (UID: \"4b62ed95-9eed-4e65-979a-a2d24fd25ef7\") " pod="openstack/keystone-3cf6-account-create-update-rhwp9" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.570159 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed0cf4f6-40a2-4df6-bdf2-959b604512bc-operator-scripts\") pod \"placement-dbbf-account-create-update-v8ksh\" (UID: \"ed0cf4f6-40a2-4df6-bdf2-959b604512bc\") " pod="openstack/placement-dbbf-account-create-update-v8ksh" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.570256 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53ece1f9-7f88-4f68-a37d-19436d21105c-operator-scripts\") pod \"placement-db-create-lrlzd\" (UID: \"53ece1f9-7f88-4f68-a37d-19436d21105c\") " pod="openstack/placement-db-create-lrlzd" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.570306 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ms7g\" (UniqueName: \"kubernetes.io/projected/ed0cf4f6-40a2-4df6-bdf2-959b604512bc-kube-api-access-8ms7g\") pod \"placement-dbbf-account-create-update-v8ksh\" (UID: \"ed0cf4f6-40a2-4df6-bdf2-959b604512bc\") " pod="openstack/placement-dbbf-account-create-update-v8ksh" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.570393 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbtm5\" (UniqueName: \"kubernetes.io/projected/53ece1f9-7f88-4f68-a37d-19436d21105c-kube-api-access-wbtm5\") pod \"placement-db-create-lrlzd\" (UID: \"53ece1f9-7f88-4f68-a37d-19436d21105c\") " pod="openstack/placement-db-create-lrlzd" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.570434 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b62ed95-9eed-4e65-979a-a2d24fd25ef7-operator-scripts\") pod \"keystone-3cf6-account-create-update-rhwp9\" (UID: \"4b62ed95-9eed-4e65-979a-a2d24fd25ef7\") " pod="openstack/keystone-3cf6-account-create-update-rhwp9" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.572094 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b62ed95-9eed-4e65-979a-a2d24fd25ef7-operator-scripts\") pod \"keystone-3cf6-account-create-update-rhwp9\" (UID: \"4b62ed95-9eed-4e65-979a-a2d24fd25ef7\") " pod="openstack/keystone-3cf6-account-create-update-rhwp9" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.573236 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53ece1f9-7f88-4f68-a37d-19436d21105c-operator-scripts\") pod \"placement-db-create-lrlzd\" (UID: \"53ece1f9-7f88-4f68-a37d-19436d21105c\") " pod="openstack/placement-db-create-lrlzd" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.600362 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbtm5\" (UniqueName: 
\"kubernetes.io/projected/53ece1f9-7f88-4f68-a37d-19436d21105c-kube-api-access-wbtm5\") pod \"placement-db-create-lrlzd\" (UID: \"53ece1f9-7f88-4f68-a37d-19436d21105c\") " pod="openstack/placement-db-create-lrlzd" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.600658 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcswg\" (UniqueName: \"kubernetes.io/projected/4b62ed95-9eed-4e65-979a-a2d24fd25ef7-kube-api-access-zcswg\") pod \"keystone-3cf6-account-create-update-rhwp9\" (UID: \"4b62ed95-9eed-4e65-979a-a2d24fd25ef7\") " pod="openstack/keystone-3cf6-account-create-update-rhwp9" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.640000 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3cf6-account-create-update-rhwp9" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.671629 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ms7g\" (UniqueName: \"kubernetes.io/projected/ed0cf4f6-40a2-4df6-bdf2-959b604512bc-kube-api-access-8ms7g\") pod \"placement-dbbf-account-create-update-v8ksh\" (UID: \"ed0cf4f6-40a2-4df6-bdf2-959b604512bc\") " pod="openstack/placement-dbbf-account-create-update-v8ksh" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.672448 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed0cf4f6-40a2-4df6-bdf2-959b604512bc-operator-scripts\") pod \"placement-dbbf-account-create-update-v8ksh\" (UID: \"ed0cf4f6-40a2-4df6-bdf2-959b604512bc\") " pod="openstack/placement-dbbf-account-create-update-v8ksh" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.673091 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed0cf4f6-40a2-4df6-bdf2-959b604512bc-operator-scripts\") pod \"placement-dbbf-account-create-update-v8ksh\" (UID: \"ed0cf4f6-40a2-4df6-bdf2-959b604512bc\") " pod="openstack/placement-dbbf-account-create-update-v8ksh" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.689634 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ms7g\" (UniqueName: \"kubernetes.io/projected/ed0cf4f6-40a2-4df6-bdf2-959b604512bc-kube-api-access-8ms7g\") pod \"placement-dbbf-account-create-update-v8ksh\" (UID: \"ed0cf4f6-40a2-4df6-bdf2-959b604512bc\") " pod="openstack/placement-dbbf-account-create-update-v8ksh" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.734148 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-lrlzd" Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.774777 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-etc-swift\") pod \"swift-storage-0\" (UID: \"54c53390-eb8a-4a06-9d50-7d0a4f10d3df\") " pod="openstack/swift-storage-0" Feb 17 18:01:24 crc kubenswrapper[4680]: E0217 18:01:24.775224 4680 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 18:01:24 crc kubenswrapper[4680]: E0217 18:01:24.775251 4680 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 18:01:24 crc kubenswrapper[4680]: E0217 18:01:24.775347 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-etc-swift podName:54c53390-eb8a-4a06-9d50-7d0a4f10d3df nodeName:}" failed. No retries permitted until 2026-02-17 18:01:32.775309 +0000 UTC m=+1022.326207679 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-etc-swift") pod "swift-storage-0" (UID: "54c53390-eb8a-4a06-9d50-7d0a4f10d3df") : configmap "swift-ring-files" not found Feb 17 18:01:24 crc kubenswrapper[4680]: I0217 18:01:24.858206 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-dbbf-account-create-update-v8ksh" Feb 17 18:01:25 crc kubenswrapper[4680]: I0217 18:01:25.593895 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 17 18:01:26 crc kubenswrapper[4680]: I0217 18:01:26.101486 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-nm76h" Feb 17 18:01:26 crc kubenswrapper[4680]: I0217 18:01:26.158984 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-gbc6h"] Feb 17 18:01:26 crc kubenswrapper[4680]: I0217 18:01:26.159242 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-gbc6h" podUID="e2313798-4510-4dae-a8d1-d4f7c5c08627" containerName="dnsmasq-dns" containerID="cri-o://3050b61eeb673ac5a7ba39eab6b9217502f53cb7e809bf5bd108a588ac24c9ed" gracePeriod=10 Feb 17 18:01:26 crc kubenswrapper[4680]: I0217 18:01:26.255709 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3cf6-account-create-update-rhwp9"] Feb 17 18:01:26 crc kubenswrapper[4680]: I0217 18:01:26.479829 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3cf6-account-create-update-rhwp9" event={"ID":"4b62ed95-9eed-4e65-979a-a2d24fd25ef7","Type":"ContainerStarted","Data":"9ceca4fdabffb9a34805a847bab7446b913f305a2a72e250fad6ac100e742dd7"} Feb 17 18:01:26 crc kubenswrapper[4680]: I0217 18:01:26.482038 4680 generic.go:334] "Generic (PLEG): container finished" podID="e2313798-4510-4dae-a8d1-d4f7c5c08627" containerID="3050b61eeb673ac5a7ba39eab6b9217502f53cb7e809bf5bd108a588ac24c9ed" exitCode=0 Feb 17 18:01:26 crc kubenswrapper[4680]: I0217 18:01:26.482099 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-gbc6h" 
event={"ID":"e2313798-4510-4dae-a8d1-d4f7c5c08627","Type":"ContainerDied","Data":"3050b61eeb673ac5a7ba39eab6b9217502f53cb7e809bf5bd108a588ac24c9ed"} Feb 17 18:01:26 crc kubenswrapper[4680]: I0217 18:01:26.483511 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-b9q8k" event={"ID":"e281ffae-b948-40c0-b8a1-68805d081000","Type":"ContainerStarted","Data":"2332488253add4e9f13621868de62e260bcaec2bd2736e2ad8a101657d81d466"} Feb 17 18:01:26 crc kubenswrapper[4680]: I0217 18:01:26.540616 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-b9q8k" podStartSLOduration=2.370493059 podStartE2EDuration="6.540598973s" podCreationTimestamp="2026-02-17 18:01:20 +0000 UTC" firstStartedPulling="2026-02-17 18:01:21.625511691 +0000 UTC m=+1011.176410370" lastFinishedPulling="2026-02-17 18:01:25.795617605 +0000 UTC m=+1015.346516284" observedRunningTime="2026-02-17 18:01:26.540065166 +0000 UTC m=+1016.090963845" watchObservedRunningTime="2026-02-17 18:01:26.540598973 +0000 UTC m=+1016.091497662" Feb 17 18:01:26 crc kubenswrapper[4680]: I0217 18:01:26.596500 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-dbbf-account-create-update-v8ksh"] Feb 17 18:01:26 crc kubenswrapper[4680]: I0217 18:01:26.625087 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-gh6l2"] Feb 17 18:01:26 crc kubenswrapper[4680]: I0217 18:01:26.642065 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-qjgvr"] Feb 17 18:01:26 crc kubenswrapper[4680]: W0217 18:01:26.651418 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9da1b635_d49b_4f80_9e40_a70d5f899087.slice/crio-a145edc0d339910b93730b291f7f088a8b646eed8fc48adb893440c4646b5de6 WatchSource:0}: Error finding container a145edc0d339910b93730b291f7f088a8b646eed8fc48adb893440c4646b5de6: Status 404 returned error can't find the container with id a145edc0d339910b93730b291f7f088a8b646eed8fc48adb893440c4646b5de6 Feb 17 18:01:26 crc kubenswrapper[4680]: I0217 18:01:26.686100 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-fdc1-account-create-update-zp4v9"] Feb 17 18:01:26 crc kubenswrapper[4680]: I0217 18:01:26.813444 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-lrlzd"] Feb 17 18:01:26 crc kubenswrapper[4680]: I0217 18:01:26.928178 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-gbc6h" Feb 17 18:01:26 crc kubenswrapper[4680]: I0217 18:01:26.952441 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.058452 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2313798-4510-4dae-a8d1-d4f7c5c08627-dns-svc\") pod \"e2313798-4510-4dae-a8d1-d4f7c5c08627\" (UID: \"e2313798-4510-4dae-a8d1-d4f7c5c08627\") " Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.058560 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xplgn\" (UniqueName: \"kubernetes.io/projected/e2313798-4510-4dae-a8d1-d4f7c5c08627-kube-api-access-xplgn\") pod \"e2313798-4510-4dae-a8d1-d4f7c5c08627\" (UID: \"e2313798-4510-4dae-a8d1-d4f7c5c08627\") " Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.058614 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2313798-4510-4dae-a8d1-d4f7c5c08627-ovsdbserver-nb\") pod \"e2313798-4510-4dae-a8d1-d4f7c5c08627\" (UID: \"e2313798-4510-4dae-a8d1-d4f7c5c08627\") " Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.058635 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2313798-4510-4dae-a8d1-d4f7c5c08627-config\") pod \"e2313798-4510-4dae-a8d1-d4f7c5c08627\" (UID: \"e2313798-4510-4dae-a8d1-d4f7c5c08627\") " Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.058694 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2313798-4510-4dae-a8d1-d4f7c5c08627-ovsdbserver-sb\") pod \"e2313798-4510-4dae-a8d1-d4f7c5c08627\" (UID: \"e2313798-4510-4dae-a8d1-d4f7c5c08627\") " Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.065775 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2313798-4510-4dae-a8d1-d4f7c5c08627-kube-api-access-xplgn" (OuterVolumeSpecName: "kube-api-access-xplgn") pod "e2313798-4510-4dae-a8d1-d4f7c5c08627" (UID: "e2313798-4510-4dae-a8d1-d4f7c5c08627"). InnerVolumeSpecName "kube-api-access-xplgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.160984 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xplgn\" (UniqueName: \"kubernetes.io/projected/e2313798-4510-4dae-a8d1-d4f7c5c08627-kube-api-access-xplgn\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.161843 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2313798-4510-4dae-a8d1-d4f7c5c08627-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e2313798-4510-4dae-a8d1-d4f7c5c08627" (UID: "e2313798-4510-4dae-a8d1-d4f7c5c08627"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.186143 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2313798-4510-4dae-a8d1-d4f7c5c08627-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e2313798-4510-4dae-a8d1-d4f7c5c08627" (UID: "e2313798-4510-4dae-a8d1-d4f7c5c08627"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.197888 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2313798-4510-4dae-a8d1-d4f7c5c08627-config" (OuterVolumeSpecName: "config") pod "e2313798-4510-4dae-a8d1-d4f7c5c08627" (UID: "e2313798-4510-4dae-a8d1-d4f7c5c08627"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.199685 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2313798-4510-4dae-a8d1-d4f7c5c08627-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e2313798-4510-4dae-a8d1-d4f7c5c08627" (UID: "e2313798-4510-4dae-a8d1-d4f7c5c08627"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.262842 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2313798-4510-4dae-a8d1-d4f7c5c08627-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.262872 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2313798-4510-4dae-a8d1-d4f7c5c08627-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.262882 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2313798-4510-4dae-a8d1-d4f7c5c08627-config\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.262890 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2313798-4510-4dae-a8d1-d4f7c5c08627-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.494043 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-gbc6h" event={"ID":"e2313798-4510-4dae-a8d1-d4f7c5c08627","Type":"ContainerDied","Data":"ecef0f25b6cc5e3704e793f10992a6a439c10b2bbb78bc8bf23cd97152099fa8"} Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.494121 4680 scope.go:117] "RemoveContainer" containerID="3050b61eeb673ac5a7ba39eab6b9217502f53cb7e809bf5bd108a588ac24c9ed" Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.494274 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-gbc6h" Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.502383 4680 generic.go:334] "Generic (PLEG): container finished" podID="e869c352-adf6-4ae6-9a3e-9d35179d50a0" containerID="ca1edc19e814ae190aedcd7c6f0821e1b9ed05931b7b4d481f2412af68a9e4f8" exitCode=0 Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.502516 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-qjgvr" event={"ID":"e869c352-adf6-4ae6-9a3e-9d35179d50a0","Type":"ContainerDied","Data":"ca1edc19e814ae190aedcd7c6f0821e1b9ed05931b7b4d481f2412af68a9e4f8"} Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.502553 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-qjgvr" event={"ID":"e869c352-adf6-4ae6-9a3e-9d35179d50a0","Type":"ContainerStarted","Data":"4837d54184099f3af48de9bc0c9484a1d1373033c177783a766d69fcead160cd"} Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.508765 4680 generic.go:334] "Generic (PLEG): container finished" podID="ed0cf4f6-40a2-4df6-bdf2-959b604512bc" containerID="dd0f5ec7194c958992254c0c6b9aa5f38c359166404fc455741d5650f80748d2" exitCode=0 Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.508935 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dbbf-account-create-update-v8ksh" event={"ID":"ed0cf4f6-40a2-4df6-bdf2-959b604512bc","Type":"ContainerDied","Data":"dd0f5ec7194c958992254c0c6b9aa5f38c359166404fc455741d5650f80748d2"} Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.508967 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dbbf-account-create-update-v8ksh" event={"ID":"ed0cf4f6-40a2-4df6-bdf2-959b604512bc","Type":"ContainerStarted","Data":"d964048fa3cb9bb4306d6e5dfae8cc5b61332f786372e574a6bc6eb571488ce9"} Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.510541 4680 generic.go:334] "Generic (PLEG): container finished" podID="9da1b635-d49b-4f80-9e40-a70d5f899087" containerID="b21cb920497dcd61d66da81791c8596450d1fc5a3b3b1cff10c29c4b441ea616" exitCode=0 Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.510587 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-fdc1-account-create-update-zp4v9" event={"ID":"9da1b635-d49b-4f80-9e40-a70d5f899087","Type":"ContainerDied","Data":"b21cb920497dcd61d66da81791c8596450d1fc5a3b3b1cff10c29c4b441ea616"} Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.510602 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-fdc1-account-create-update-zp4v9" event={"ID":"9da1b635-d49b-4f80-9e40-a70d5f899087","Type":"ContainerStarted","Data":"a145edc0d339910b93730b291f7f088a8b646eed8fc48adb893440c4646b5de6"} Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.513435 4680 generic.go:334] "Generic (PLEG): container finished" podID="eddac82b-15c0-4bb8-96e4-9503ac536137" containerID="5b9e3080c416b010f32f7323b5bcdbe18aff9977b9ae167d0d605a7ba8691869" exitCode=0 Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.513474 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-gh6l2" event={"ID":"eddac82b-15c0-4bb8-96e4-9503ac536137","Type":"ContainerDied","Data":"5b9e3080c416b010f32f7323b5bcdbe18aff9977b9ae167d0d605a7ba8691869"} Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.513526 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-gh6l2" 
event={"ID":"eddac82b-15c0-4bb8-96e4-9503ac536137","Type":"ContainerStarted","Data":"03642c6c232b942a6da6ef79c2775c66a421c58e4dcf0aa77d2d095cc5e1dee2"} Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.518595 4680 generic.go:334] "Generic (PLEG): container finished" podID="4b62ed95-9eed-4e65-979a-a2d24fd25ef7" containerID="3df45c12bd4773e04d7241fc168a4763d742ff32685eea759dfc519a0190c90f" exitCode=0 Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.518741 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3cf6-account-create-update-rhwp9" event={"ID":"4b62ed95-9eed-4e65-979a-a2d24fd25ef7","Type":"ContainerDied","Data":"3df45c12bd4773e04d7241fc168a4763d742ff32685eea759dfc519a0190c90f"} Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.523739 4680 generic.go:334] "Generic (PLEG): container finished" podID="53ece1f9-7f88-4f68-a37d-19436d21105c" containerID="8016522a667f8e8b07b787799ccb97b3e1695eb7f240cfeb1073717565d6a71b" exitCode=0 Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.523788 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lrlzd" event={"ID":"53ece1f9-7f88-4f68-a37d-19436d21105c","Type":"ContainerDied","Data":"8016522a667f8e8b07b787799ccb97b3e1695eb7f240cfeb1073717565d6a71b"} Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.523841 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lrlzd" event={"ID":"53ece1f9-7f88-4f68-a37d-19436d21105c","Type":"ContainerStarted","Data":"ade6801fb64cc6dbaa4701cb3c75353263498afdaf745166ba944f5f061a0638"} Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.635128 4680 scope.go:117] "RemoveContainer" containerID="bf2175abc2d873532fae1e30ec8eccf35d37d5befb6053e66e68765be1f04080" Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.638694 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-gbc6h"] Feb 17 18:01:27 crc kubenswrapper[4680]: I0217 18:01:27.645581 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-gbc6h"] Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.004141 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-lrlzd" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.119643 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2313798-4510-4dae-a8d1-d4f7c5c08627" path="/var/lib/kubelet/pods/e2313798-4510-4dae-a8d1-d4f7c5c08627/volumes" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.193086 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-qjgvr" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.204006 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53ece1f9-7f88-4f68-a37d-19436d21105c-operator-scripts\") pod \"53ece1f9-7f88-4f68-a37d-19436d21105c\" (UID: \"53ece1f9-7f88-4f68-a37d-19436d21105c\") " Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.204115 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbtm5\" (UniqueName: \"kubernetes.io/projected/53ece1f9-7f88-4f68-a37d-19436d21105c-kube-api-access-wbtm5\") pod \"53ece1f9-7f88-4f68-a37d-19436d21105c\" (UID: \"53ece1f9-7f88-4f68-a37d-19436d21105c\") " Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.205848 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53ece1f9-7f88-4f68-a37d-19436d21105c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "53ece1f9-7f88-4f68-a37d-19436d21105c" (UID: "53ece1f9-7f88-4f68-a37d-19436d21105c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.206267 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3cf6-account-create-update-rhwp9" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.220978 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53ece1f9-7f88-4f68-a37d-19436d21105c-kube-api-access-wbtm5" (OuterVolumeSpecName: "kube-api-access-wbtm5") pod "53ece1f9-7f88-4f68-a37d-19436d21105c" (UID: "53ece1f9-7f88-4f68-a37d-19436d21105c"). InnerVolumeSpecName "kube-api-access-wbtm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.222984 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-dbbf-account-create-update-v8ksh" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.235965 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-fdc1-account-create-update-zp4v9" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.238126 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-gh6l2" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.306180 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8z29\" (UniqueName: \"kubernetes.io/projected/e869c352-adf6-4ae6-9a3e-9d35179d50a0-kube-api-access-v8z29\") pod \"e869c352-adf6-4ae6-9a3e-9d35179d50a0\" (UID: \"e869c352-adf6-4ae6-9a3e-9d35179d50a0\") " Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.306360 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e869c352-adf6-4ae6-9a3e-9d35179d50a0-operator-scripts\") pod \"e869c352-adf6-4ae6-9a3e-9d35179d50a0\" (UID: \"e869c352-adf6-4ae6-9a3e-9d35179d50a0\") " Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.307646 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53ece1f9-7f88-4f68-a37d-19436d21105c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.307774 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbtm5\" (UniqueName: \"kubernetes.io/projected/53ece1f9-7f88-4f68-a37d-19436d21105c-kube-api-access-wbtm5\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.308173 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e869c352-adf6-4ae6-9a3e-9d35179d50a0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e869c352-adf6-4ae6-9a3e-9d35179d50a0" (UID: "e869c352-adf6-4ae6-9a3e-9d35179d50a0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.311475 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e869c352-adf6-4ae6-9a3e-9d35179d50a0-kube-api-access-v8z29" (OuterVolumeSpecName: "kube-api-access-v8z29") pod "e869c352-adf6-4ae6-9a3e-9d35179d50a0" (UID: "e869c352-adf6-4ae6-9a3e-9d35179d50a0"). InnerVolumeSpecName "kube-api-access-v8z29". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.408784 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b62ed95-9eed-4e65-979a-a2d24fd25ef7-operator-scripts\") pod \"4b62ed95-9eed-4e65-979a-a2d24fd25ef7\" (UID: \"4b62ed95-9eed-4e65-979a-a2d24fd25ef7\") " Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.408846 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eddac82b-15c0-4bb8-96e4-9503ac536137-operator-scripts\") pod \"eddac82b-15c0-4bb8-96e4-9503ac536137\" (UID: \"eddac82b-15c0-4bb8-96e4-9503ac536137\") " Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.408876 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqk2h\" (UniqueName: \"kubernetes.io/projected/eddac82b-15c0-4bb8-96e4-9503ac536137-kube-api-access-xqk2h\") pod \"eddac82b-15c0-4bb8-96e4-9503ac536137\" (UID: \"eddac82b-15c0-4bb8-96e4-9503ac536137\") " Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.408917 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed0cf4f6-40a2-4df6-bdf2-959b604512bc-operator-scripts\") pod \"ed0cf4f6-40a2-4df6-bdf2-959b604512bc\" (UID: \"ed0cf4f6-40a2-4df6-bdf2-959b604512bc\") " Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.408961 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9da1b635-d49b-4f80-9e40-a70d5f899087-operator-scripts\") pod \"9da1b635-d49b-4f80-9e40-a70d5f899087\" (UID: \"9da1b635-d49b-4f80-9e40-a70d5f899087\") " Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.408984 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcswg\" (UniqueName: \"kubernetes.io/projected/4b62ed95-9eed-4e65-979a-a2d24fd25ef7-kube-api-access-zcswg\") pod \"4b62ed95-9eed-4e65-979a-a2d24fd25ef7\" (UID: \"4b62ed95-9eed-4e65-979a-a2d24fd25ef7\") " Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.409017 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ms7g\" (UniqueName: \"kubernetes.io/projected/ed0cf4f6-40a2-4df6-bdf2-959b604512bc-kube-api-access-8ms7g\") pod \"ed0cf4f6-40a2-4df6-bdf2-959b604512bc\" (UID: \"ed0cf4f6-40a2-4df6-bdf2-959b604512bc\") " Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.409042 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24pw5\" (UniqueName: \"kubernetes.io/projected/9da1b635-d49b-4f80-9e40-a70d5f899087-kube-api-access-24pw5\") pod \"9da1b635-d49b-4f80-9e40-a70d5f899087\" (UID: \"9da1b635-d49b-4f80-9e40-a70d5f899087\") " Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.409358 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8z29\" (UniqueName: \"kubernetes.io/projected/e869c352-adf6-4ae6-9a3e-9d35179d50a0-kube-api-access-v8z29\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.409388 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e869c352-adf6-4ae6-9a3e-9d35179d50a0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:29 crc kubenswrapper[4680]: 
I0217 18:01:29.409258 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b62ed95-9eed-4e65-979a-a2d24fd25ef7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4b62ed95-9eed-4e65-979a-a2d24fd25ef7" (UID: "4b62ed95-9eed-4e65-979a-a2d24fd25ef7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.409931 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed0cf4f6-40a2-4df6-bdf2-959b604512bc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ed0cf4f6-40a2-4df6-bdf2-959b604512bc" (UID: "ed0cf4f6-40a2-4df6-bdf2-959b604512bc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.409951 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eddac82b-15c0-4bb8-96e4-9503ac536137-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eddac82b-15c0-4bb8-96e4-9503ac536137" (UID: "eddac82b-15c0-4bb8-96e4-9503ac536137"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.410102 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9da1b635-d49b-4f80-9e40-a70d5f899087-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9da1b635-d49b-4f80-9e40-a70d5f899087" (UID: "9da1b635-d49b-4f80-9e40-a70d5f899087"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.412614 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9da1b635-d49b-4f80-9e40-a70d5f899087-kube-api-access-24pw5" (OuterVolumeSpecName: "kube-api-access-24pw5") pod "9da1b635-d49b-4f80-9e40-a70d5f899087" (UID: "9da1b635-d49b-4f80-9e40-a70d5f899087"). InnerVolumeSpecName "kube-api-access-24pw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.412646 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b62ed95-9eed-4e65-979a-a2d24fd25ef7-kube-api-access-zcswg" (OuterVolumeSpecName: "kube-api-access-zcswg") pod "4b62ed95-9eed-4e65-979a-a2d24fd25ef7" (UID: "4b62ed95-9eed-4e65-979a-a2d24fd25ef7"). InnerVolumeSpecName "kube-api-access-zcswg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.412759 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed0cf4f6-40a2-4df6-bdf2-959b604512bc-kube-api-access-8ms7g" (OuterVolumeSpecName: "kube-api-access-8ms7g") pod "ed0cf4f6-40a2-4df6-bdf2-959b604512bc" (UID: "ed0cf4f6-40a2-4df6-bdf2-959b604512bc"). InnerVolumeSpecName "kube-api-access-8ms7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.413081 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eddac82b-15c0-4bb8-96e4-9503ac536137-kube-api-access-xqk2h" (OuterVolumeSpecName: "kube-api-access-xqk2h") pod "eddac82b-15c0-4bb8-96e4-9503ac536137" (UID: "eddac82b-15c0-4bb8-96e4-9503ac536137"). InnerVolumeSpecName "kube-api-access-xqk2h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.510847 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b62ed95-9eed-4e65-979a-a2d24fd25ef7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.510889 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eddac82b-15c0-4bb8-96e4-9503ac536137-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.510901 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqk2h\" (UniqueName: \"kubernetes.io/projected/eddac82b-15c0-4bb8-96e4-9503ac536137-kube-api-access-xqk2h\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.510912 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed0cf4f6-40a2-4df6-bdf2-959b604512bc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.510921 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9da1b635-d49b-4f80-9e40-a70d5f899087-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.510932 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcswg\" (UniqueName: \"kubernetes.io/projected/4b62ed95-9eed-4e65-979a-a2d24fd25ef7-kube-api-access-zcswg\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.510941 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ms7g\" (UniqueName: \"kubernetes.io/projected/ed0cf4f6-40a2-4df6-bdf2-959b604512bc-kube-api-access-8ms7g\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.510949 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24pw5\" (UniqueName: \"kubernetes.io/projected/9da1b635-d49b-4f80-9e40-a70d5f899087-kube-api-access-24pw5\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.542403 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-gh6l2" event={"ID":"eddac82b-15c0-4bb8-96e4-9503ac536137","Type":"ContainerDied","Data":"03642c6c232b942a6da6ef79c2775c66a421c58e4dcf0aa77d2d095cc5e1dee2"} Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.542689 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03642c6c232b942a6da6ef79c2775c66a421c58e4dcf0aa77d2d095cc5e1dee2" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.542418 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-gh6l2" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.543719 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3cf6-account-create-update-rhwp9" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.543856 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3cf6-account-create-update-rhwp9" event={"ID":"4b62ed95-9eed-4e65-979a-a2d24fd25ef7","Type":"ContainerDied","Data":"9ceca4fdabffb9a34805a847bab7446b913f305a2a72e250fad6ac100e742dd7"} Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.543912 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ceca4fdabffb9a34805a847bab7446b913f305a2a72e250fad6ac100e742dd7" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.551980 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lrlzd" event={"ID":"53ece1f9-7f88-4f68-a37d-19436d21105c","Type":"ContainerDied","Data":"ade6801fb64cc6dbaa4701cb3c75353263498afdaf745166ba944f5f061a0638"} Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.552025 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ade6801fb64cc6dbaa4701cb3c75353263498afdaf745166ba944f5f061a0638" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.551988 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-lrlzd" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.554001 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-qjgvr" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.554149 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-qjgvr" event={"ID":"e869c352-adf6-4ae6-9a3e-9d35179d50a0","Type":"ContainerDied","Data":"4837d54184099f3af48de9bc0c9484a1d1373033c177783a766d69fcead160cd"} Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.554244 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4837d54184099f3af48de9bc0c9484a1d1373033c177783a766d69fcead160cd" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.555270 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dbbf-account-create-update-v8ksh" event={"ID":"ed0cf4f6-40a2-4df6-bdf2-959b604512bc","Type":"ContainerDied","Data":"d964048fa3cb9bb4306d6e5dfae8cc5b61332f786372e574a6bc6eb571488ce9"} Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.555402 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d964048fa3cb9bb4306d6e5dfae8cc5b61332f786372e574a6bc6eb571488ce9" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.555501 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-dbbf-account-create-update-v8ksh" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.557507 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-fdc1-account-create-update-zp4v9" event={"ID":"9da1b635-d49b-4f80-9e40-a70d5f899087","Type":"ContainerDied","Data":"a145edc0d339910b93730b291f7f088a8b646eed8fc48adb893440c4646b5de6"} Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.557534 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a145edc0d339910b93730b291f7f088a8b646eed8fc48adb893440c4646b5de6" Feb 17 18:01:29 crc kubenswrapper[4680]: I0217 18:01:29.557571 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-fdc1-account-create-update-zp4v9" Feb 17 18:01:30 crc kubenswrapper[4680]: I0217 18:01:30.229080 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-xkn6f"] Feb 17 18:01:30 crc kubenswrapper[4680]: E0217 18:01:30.229449 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2313798-4510-4dae-a8d1-d4f7c5c08627" containerName="init" Feb 17 18:01:30 crc kubenswrapper[4680]: I0217 18:01:30.229466 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2313798-4510-4dae-a8d1-d4f7c5c08627" containerName="init" Feb 17 18:01:30 crc kubenswrapper[4680]: E0217 18:01:30.229508 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b62ed95-9eed-4e65-979a-a2d24fd25ef7" containerName="mariadb-account-create-update" Feb 17 18:01:30 crc kubenswrapper[4680]: I0217 18:01:30.229518 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b62ed95-9eed-4e65-979a-a2d24fd25ef7" containerName="mariadb-account-create-update" Feb 17 18:01:30 crc kubenswrapper[4680]: E0217 18:01:30.229529 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed0cf4f6-40a2-4df6-bdf2-959b604512bc" containerName="mariadb-account-create-update" Feb 17 18:01:30 crc kubenswrapper[4680]: I0217 18:01:30.229539 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed0cf4f6-40a2-4df6-bdf2-959b604512bc" containerName="mariadb-account-create-update" Feb 17 18:01:30 crc kubenswrapper[4680]: E0217 18:01:30.229554 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eddac82b-15c0-4bb8-96e4-9503ac536137" containerName="mariadb-database-create" Feb 17 18:01:30 crc kubenswrapper[4680]: I0217 18:01:30.229559 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="eddac82b-15c0-4bb8-96e4-9503ac536137" containerName="mariadb-database-create" Feb 17 18:01:30 crc kubenswrapper[4680]: E0217 18:01:30.229580 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9da1b635-d49b-4f80-9e40-a70d5f899087" containerName="mariadb-account-create-update" Feb 17 18:01:30 crc kubenswrapper[4680]: I0217 18:01:30.229586 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="9da1b635-d49b-4f80-9e40-a70d5f899087" containerName="mariadb-account-create-update" Feb 17 18:01:30 crc kubenswrapper[4680]: E0217 18:01:30.229595 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e869c352-adf6-4ae6-9a3e-9d35179d50a0" containerName="mariadb-database-create" Feb 17 18:01:30 crc kubenswrapper[4680]: I0217 18:01:30.229601 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e869c352-adf6-4ae6-9a3e-9d35179d50a0" containerName="mariadb-database-create" Feb 17 18:01:30 crc kubenswrapper[4680]: E0217 18:01:30.229611 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ece1f9-7f88-4f68-a37d-19436d21105c" containerName="mariadb-database-create" Feb 17 18:01:30 crc kubenswrapper[4680]: I0217 18:01:30.229617 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ece1f9-7f88-4f68-a37d-19436d21105c" containerName="mariadb-database-create" Feb 17 18:01:30 crc kubenswrapper[4680]: E0217 18:01:30.229632 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2313798-4510-4dae-a8d1-d4f7c5c08627" containerName="dnsmasq-dns" Feb 17 18:01:30 crc kubenswrapper[4680]: I0217 18:01:30.229638 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2313798-4510-4dae-a8d1-d4f7c5c08627" containerName="dnsmasq-dns" Feb 17 18:01:30 crc 
kubenswrapper[4680]: I0217 18:01:30.229825 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="53ece1f9-7f88-4f68-a37d-19436d21105c" containerName="mariadb-database-create" Feb 17 18:01:30 crc kubenswrapper[4680]: I0217 18:01:30.229838 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2313798-4510-4dae-a8d1-d4f7c5c08627" containerName="dnsmasq-dns" Feb 17 18:01:30 crc kubenswrapper[4680]: I0217 18:01:30.229849 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="eddac82b-15c0-4bb8-96e4-9503ac536137" containerName="mariadb-database-create" Feb 17 18:01:30 crc kubenswrapper[4680]: I0217 18:01:30.229857 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="e869c352-adf6-4ae6-9a3e-9d35179d50a0" containerName="mariadb-database-create" Feb 17 18:01:30 crc kubenswrapper[4680]: I0217 18:01:30.229863 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="9da1b635-d49b-4f80-9e40-a70d5f899087" containerName="mariadb-account-create-update" Feb 17 18:01:30 crc kubenswrapper[4680]: I0217 18:01:30.229870 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed0cf4f6-40a2-4df6-bdf2-959b604512bc" containerName="mariadb-account-create-update" Feb 17 18:01:30 crc kubenswrapper[4680]: I0217 18:01:30.229882 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b62ed95-9eed-4e65-979a-a2d24fd25ef7" containerName="mariadb-account-create-update" Feb 17 18:01:30 crc kubenswrapper[4680]: I0217 18:01:30.230391 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xkn6f" Feb 17 18:01:30 crc kubenswrapper[4680]: I0217 18:01:30.234394 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 17 18:01:30 crc kubenswrapper[4680]: I0217 18:01:30.254095 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xkn6f"] Feb 17 18:01:30 crc kubenswrapper[4680]: I0217 18:01:30.426167 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24hpl\" (UniqueName: \"kubernetes.io/projected/094acb88-dba8-4795-9179-abb75e7348cc-kube-api-access-24hpl\") pod \"root-account-create-update-xkn6f\" (UID: \"094acb88-dba8-4795-9179-abb75e7348cc\") " pod="openstack/root-account-create-update-xkn6f" Feb 17 18:01:30 crc kubenswrapper[4680]: I0217 18:01:30.426240 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/094acb88-dba8-4795-9179-abb75e7348cc-operator-scripts\") pod \"root-account-create-update-xkn6f\" (UID: \"094acb88-dba8-4795-9179-abb75e7348cc\") " pod="openstack/root-account-create-update-xkn6f" Feb 17 18:01:30 crc kubenswrapper[4680]: I0217 18:01:30.528161 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24hpl\" (UniqueName: \"kubernetes.io/projected/094acb88-dba8-4795-9179-abb75e7348cc-kube-api-access-24hpl\") pod \"root-account-create-update-xkn6f\" (UID: \"094acb88-dba8-4795-9179-abb75e7348cc\") " pod="openstack/root-account-create-update-xkn6f" Feb 17 18:01:30 crc kubenswrapper[4680]: I0217 18:01:30.528209 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/094acb88-dba8-4795-9179-abb75e7348cc-operator-scripts\") pod 
\"root-account-create-update-xkn6f\" (UID: \"094acb88-dba8-4795-9179-abb75e7348cc\") " pod="openstack/root-account-create-update-xkn6f" Feb 17 18:01:30 crc kubenswrapper[4680]: I0217 18:01:30.529284 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/094acb88-dba8-4795-9179-abb75e7348cc-operator-scripts\") pod \"root-account-create-update-xkn6f\" (UID: \"094acb88-dba8-4795-9179-abb75e7348cc\") " pod="openstack/root-account-create-update-xkn6f" Feb 17 18:01:30 crc kubenswrapper[4680]: I0217 18:01:30.547189 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24hpl\" (UniqueName: \"kubernetes.io/projected/094acb88-dba8-4795-9179-abb75e7348cc-kube-api-access-24hpl\") pod \"root-account-create-update-xkn6f\" (UID: \"094acb88-dba8-4795-9179-abb75e7348cc\") " pod="openstack/root-account-create-update-xkn6f" Feb 17 18:01:30 crc kubenswrapper[4680]: I0217 18:01:30.846327 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xkn6f" Feb 17 18:01:31 crc kubenswrapper[4680]: I0217 18:01:31.276123 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xkn6f"] Feb 17 18:01:31 crc kubenswrapper[4680]: I0217 18:01:31.288809 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 17 18:01:31 crc kubenswrapper[4680]: I0217 18:01:31.587023 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xkn6f" event={"ID":"094acb88-dba8-4795-9179-abb75e7348cc","Type":"ContainerStarted","Data":"acacd5c6b4ae7f30a10d418918338f7071b41f9a3c126603192f50f06efed3e7"} Feb 17 18:01:31 crc kubenswrapper[4680]: I0217 18:01:31.587073 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xkn6f" event={"ID":"094acb88-dba8-4795-9179-abb75e7348cc","Type":"ContainerStarted","Data":"f440e815146c0a844acf1ca4ad7f5e54e1c0cb7722e0fc59912058499a0dd22d"} Feb 17 18:01:31 crc kubenswrapper[4680]: I0217 18:01:31.602043 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-xkn6f" podStartSLOduration=1.6020260450000001 podStartE2EDuration="1.602026045s" podCreationTimestamp="2026-02-17 18:01:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:01:31.600105785 +0000 UTC m=+1021.151004464" watchObservedRunningTime="2026-02-17 18:01:31.602026045 +0000 UTC m=+1021.152924734" Feb 17 18:01:32 crc kubenswrapper[4680]: I0217 18:01:32.593601 4680 generic.go:334] "Generic (PLEG): container finished" podID="094acb88-dba8-4795-9179-abb75e7348cc" containerID="acacd5c6b4ae7f30a10d418918338f7071b41f9a3c126603192f50f06efed3e7" exitCode=0 Feb 17 18:01:32 crc kubenswrapper[4680]: I0217 18:01:32.593646 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xkn6f" event={"ID":"094acb88-dba8-4795-9179-abb75e7348cc","Type":"ContainerDied","Data":"acacd5c6b4ae7f30a10d418918338f7071b41f9a3c126603192f50f06efed3e7"} Feb 17 18:01:32 crc kubenswrapper[4680]: I0217 18:01:32.867357 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-etc-swift\") pod \"swift-storage-0\" (UID: 
\"54c53390-eb8a-4a06-9d50-7d0a4f10d3df\") " pod="openstack/swift-storage-0" Feb 17 18:01:32 crc kubenswrapper[4680]: E0217 18:01:32.867542 4680 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 18:01:32 crc kubenswrapper[4680]: E0217 18:01:32.867570 4680 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 18:01:32 crc kubenswrapper[4680]: E0217 18:01:32.867607 4680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-etc-swift podName:54c53390-eb8a-4a06-9d50-7d0a4f10d3df nodeName:}" failed. No retries permitted until 2026-02-17 18:01:48.867594758 +0000 UTC m=+1038.418493437 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-etc-swift") pod "swift-storage-0" (UID: "54c53390-eb8a-4a06-9d50-7d0a4f10d3df") : configmap "swift-ring-files" not found Feb 17 18:01:33 crc kubenswrapper[4680]: I0217 18:01:33.911235 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xkn6f" Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.088847 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/094acb88-dba8-4795-9179-abb75e7348cc-operator-scripts\") pod \"094acb88-dba8-4795-9179-abb75e7348cc\" (UID: \"094acb88-dba8-4795-9179-abb75e7348cc\") " Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.088952 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24hpl\" (UniqueName: \"kubernetes.io/projected/094acb88-dba8-4795-9179-abb75e7348cc-kube-api-access-24hpl\") pod \"094acb88-dba8-4795-9179-abb75e7348cc\" (UID: \"094acb88-dba8-4795-9179-abb75e7348cc\") " Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.089695 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/094acb88-dba8-4795-9179-abb75e7348cc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "094acb88-dba8-4795-9179-abb75e7348cc" (UID: "094acb88-dba8-4795-9179-abb75e7348cc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.096821 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/094acb88-dba8-4795-9179-abb75e7348cc-kube-api-access-24hpl" (OuterVolumeSpecName: "kube-api-access-24hpl") pod "094acb88-dba8-4795-9179-abb75e7348cc" (UID: "094acb88-dba8-4795-9179-abb75e7348cc"). InnerVolumeSpecName "kube-api-access-24hpl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.143475 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-dnfft"] Feb 17 18:01:34 crc kubenswrapper[4680]: E0217 18:01:34.143820 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="094acb88-dba8-4795-9179-abb75e7348cc" containerName="mariadb-account-create-update" Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.143834 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="094acb88-dba8-4795-9179-abb75e7348cc" containerName="mariadb-account-create-update" Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.144018 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="094acb88-dba8-4795-9179-abb75e7348cc" containerName="mariadb-account-create-update" Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.144539 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-dnfft" Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.147893 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.149257 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-429lj" Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.159758 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-dnfft"] Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.191122 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c1a34b3-87e0-4584-8a46-18320fe88a7e-config-data\") pod \"glance-db-sync-dnfft\" (UID: \"4c1a34b3-87e0-4584-8a46-18320fe88a7e\") " pod="openstack/glance-db-sync-dnfft" Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.191179 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c1a34b3-87e0-4584-8a46-18320fe88a7e-db-sync-config-data\") pod \"glance-db-sync-dnfft\" (UID: \"4c1a34b3-87e0-4584-8a46-18320fe88a7e\") " pod="openstack/glance-db-sync-dnfft" Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.191297 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c1a34b3-87e0-4584-8a46-18320fe88a7e-combined-ca-bundle\") pod \"glance-db-sync-dnfft\" (UID: \"4c1a34b3-87e0-4584-8a46-18320fe88a7e\") " pod="openstack/glance-db-sync-dnfft" Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.191448 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n46ll\" (UniqueName: \"kubernetes.io/projected/4c1a34b3-87e0-4584-8a46-18320fe88a7e-kube-api-access-n46ll\") pod \"glance-db-sync-dnfft\" (UID: \"4c1a34b3-87e0-4584-8a46-18320fe88a7e\") " pod="openstack/glance-db-sync-dnfft" Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.191567 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24hpl\" (UniqueName: \"kubernetes.io/projected/094acb88-dba8-4795-9179-abb75e7348cc-kube-api-access-24hpl\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.191585 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/094acb88-dba8-4795-9179-abb75e7348cc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.292839 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n46ll\" (UniqueName: \"kubernetes.io/projected/4c1a34b3-87e0-4584-8a46-18320fe88a7e-kube-api-access-n46ll\") pod \"glance-db-sync-dnfft\" (UID: \"4c1a34b3-87e0-4584-8a46-18320fe88a7e\") " pod="openstack/glance-db-sync-dnfft" Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.292924 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c1a34b3-87e0-4584-8a46-18320fe88a7e-config-data\") pod \"glance-db-sync-dnfft\" (UID: \"4c1a34b3-87e0-4584-8a46-18320fe88a7e\") " pod="openstack/glance-db-sync-dnfft" Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.292960 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c1a34b3-87e0-4584-8a46-18320fe88a7e-db-sync-config-data\") pod \"glance-db-sync-dnfft\" (UID: \"4c1a34b3-87e0-4584-8a46-18320fe88a7e\") " pod="openstack/glance-db-sync-dnfft" Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.293016 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c1a34b3-87e0-4584-8a46-18320fe88a7e-combined-ca-bundle\") pod \"glance-db-sync-dnfft\" (UID: \"4c1a34b3-87e0-4584-8a46-18320fe88a7e\") " pod="openstack/glance-db-sync-dnfft" Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.299945 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c1a34b3-87e0-4584-8a46-18320fe88a7e-db-sync-config-data\") pod \"glance-db-sync-dnfft\" (UID: \"4c1a34b3-87e0-4584-8a46-18320fe88a7e\") " pod="openstack/glance-db-sync-dnfft" Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.302181 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c1a34b3-87e0-4584-8a46-18320fe88a7e-config-data\") pod \"glance-db-sync-dnfft\" (UID: \"4c1a34b3-87e0-4584-8a46-18320fe88a7e\") " pod="openstack/glance-db-sync-dnfft" Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.307507 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c1a34b3-87e0-4584-8a46-18320fe88a7e-combined-ca-bundle\") pod \"glance-db-sync-dnfft\" (UID: \"4c1a34b3-87e0-4584-8a46-18320fe88a7e\") " pod="openstack/glance-db-sync-dnfft" Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.316448 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n46ll\" (UniqueName: \"kubernetes.io/projected/4c1a34b3-87e0-4584-8a46-18320fe88a7e-kube-api-access-n46ll\") pod \"glance-db-sync-dnfft\" (UID: \"4c1a34b3-87e0-4584-8a46-18320fe88a7e\") " pod="openstack/glance-db-sync-dnfft" Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.458596 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-dnfft" Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.514763 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-7dz9r" podUID="2ce593ef-6a97-4361-af72-7637711c07b5" containerName="ovn-controller" probeResult="failure" output=< Feb 17 18:01:34 crc kubenswrapper[4680]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 17 18:01:34 crc kubenswrapper[4680]: > Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.554835 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-grk2z" Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.608808 4680 generic.go:334] "Generic (PLEG): container finished" podID="e281ffae-b948-40c0-b8a1-68805d081000" containerID="2332488253add4e9f13621868de62e260bcaec2bd2736e2ad8a101657d81d466" exitCode=0 Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.608870 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-b9q8k" event={"ID":"e281ffae-b948-40c0-b8a1-68805d081000","Type":"ContainerDied","Data":"2332488253add4e9f13621868de62e260bcaec2bd2736e2ad8a101657d81d466"} Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.614811 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xkn6f" event={"ID":"094acb88-dba8-4795-9179-abb75e7348cc","Type":"ContainerDied","Data":"f440e815146c0a844acf1ca4ad7f5e54e1c0cb7722e0fc59912058499a0dd22d"} Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.614867 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f440e815146c0a844acf1ca4ad7f5e54e1c0cb7722e0fc59912058499a0dd22d" Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.614821 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xkn6f" Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.634090 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-grk2z" Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.860482 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-7dz9r-config-sjgkg"] Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.861944 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-7dz9r-config-sjgkg" Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.871054 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7dz9r-config-sjgkg"] Feb 17 18:01:34 crc kubenswrapper[4680]: I0217 18:01:34.875635 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 17 18:01:35 crc kubenswrapper[4680]: I0217 18:01:35.007128 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-var-run-ovn\") pod \"ovn-controller-7dz9r-config-sjgkg\" (UID: \"acdbcdac-4d93-48d1-8562-647fbb3cbcfa\") " pod="openstack/ovn-controller-7dz9r-config-sjgkg" Feb 17 18:01:35 crc kubenswrapper[4680]: I0217 18:01:35.007709 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-additional-scripts\") pod \"ovn-controller-7dz9r-config-sjgkg\" (UID: \"acdbcdac-4d93-48d1-8562-647fbb3cbcfa\") " pod="openstack/ovn-controller-7dz9r-config-sjgkg" Feb 17 18:01:35 crc kubenswrapper[4680]: I0217 18:01:35.008124 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-scripts\") pod \"ovn-controller-7dz9r-config-sjgkg\" (UID: \"acdbcdac-4d93-48d1-8562-647fbb3cbcfa\") " pod="openstack/ovn-controller-7dz9r-config-sjgkg" Feb 17 18:01:35 crc kubenswrapper[4680]: I0217 18:01:35.008197 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56722\" (UniqueName: \"kubernetes.io/projected/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-kube-api-access-56722\") pod \"ovn-controller-7dz9r-config-sjgkg\" (UID: \"acdbcdac-4d93-48d1-8562-647fbb3cbcfa\") " pod="openstack/ovn-controller-7dz9r-config-sjgkg" Feb 17 18:01:35 crc kubenswrapper[4680]: I0217 18:01:35.008218 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-var-log-ovn\") pod \"ovn-controller-7dz9r-config-sjgkg\" (UID: \"acdbcdac-4d93-48d1-8562-647fbb3cbcfa\") " pod="openstack/ovn-controller-7dz9r-config-sjgkg" Feb 17 18:01:35 crc kubenswrapper[4680]: I0217 18:01:35.008255 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-var-run\") pod \"ovn-controller-7dz9r-config-sjgkg\" (UID: \"acdbcdac-4d93-48d1-8562-647fbb3cbcfa\") " pod="openstack/ovn-controller-7dz9r-config-sjgkg" Feb 17 18:01:35 crc kubenswrapper[4680]: I0217 18:01:35.059695 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-dnfft"] Feb 17 18:01:35 crc kubenswrapper[4680]: I0217 18:01:35.109113 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-var-run\") pod \"ovn-controller-7dz9r-config-sjgkg\" (UID: \"acdbcdac-4d93-48d1-8562-647fbb3cbcfa\") " pod="openstack/ovn-controller-7dz9r-config-sjgkg" Feb 17 18:01:35 crc kubenswrapper[4680]: I0217 18:01:35.109173 4680 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-var-run-ovn\") pod \"ovn-controller-7dz9r-config-sjgkg\" (UID: \"acdbcdac-4d93-48d1-8562-647fbb3cbcfa\") " pod="openstack/ovn-controller-7dz9r-config-sjgkg" Feb 17 18:01:35 crc kubenswrapper[4680]: I0217 18:01:35.109216 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-additional-scripts\") pod \"ovn-controller-7dz9r-config-sjgkg\" (UID: \"acdbcdac-4d93-48d1-8562-647fbb3cbcfa\") " pod="openstack/ovn-controller-7dz9r-config-sjgkg" Feb 17 18:01:35 crc kubenswrapper[4680]: I0217 18:01:35.109270 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-scripts\") pod \"ovn-controller-7dz9r-config-sjgkg\" (UID: \"acdbcdac-4d93-48d1-8562-647fbb3cbcfa\") " pod="openstack/ovn-controller-7dz9r-config-sjgkg" Feb 17 18:01:35 crc kubenswrapper[4680]: I0217 18:01:35.109330 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56722\" (UniqueName: \"kubernetes.io/projected/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-kube-api-access-56722\") pod \"ovn-controller-7dz9r-config-sjgkg\" (UID: \"acdbcdac-4d93-48d1-8562-647fbb3cbcfa\") " pod="openstack/ovn-controller-7dz9r-config-sjgkg" Feb 17 18:01:35 crc kubenswrapper[4680]: I0217 18:01:35.109352 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-var-log-ovn\") pod \"ovn-controller-7dz9r-config-sjgkg\" (UID: \"acdbcdac-4d93-48d1-8562-647fbb3cbcfa\") " pod="openstack/ovn-controller-7dz9r-config-sjgkg" Feb 17 18:01:35 crc kubenswrapper[4680]: I0217 18:01:35.109699 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-var-log-ovn\") pod \"ovn-controller-7dz9r-config-sjgkg\" (UID: \"acdbcdac-4d93-48d1-8562-647fbb3cbcfa\") " pod="openstack/ovn-controller-7dz9r-config-sjgkg" Feb 17 18:01:35 crc kubenswrapper[4680]: I0217 18:01:35.109985 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-var-run\") pod \"ovn-controller-7dz9r-config-sjgkg\" (UID: \"acdbcdac-4d93-48d1-8562-647fbb3cbcfa\") " pod="openstack/ovn-controller-7dz9r-config-sjgkg" Feb 17 18:01:35 crc kubenswrapper[4680]: I0217 18:01:35.110043 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-var-run-ovn\") pod \"ovn-controller-7dz9r-config-sjgkg\" (UID: \"acdbcdac-4d93-48d1-8562-647fbb3cbcfa\") " pod="openstack/ovn-controller-7dz9r-config-sjgkg" Feb 17 18:01:35 crc kubenswrapper[4680]: I0217 18:01:35.110206 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-additional-scripts\") pod \"ovn-controller-7dz9r-config-sjgkg\" (UID: \"acdbcdac-4d93-48d1-8562-647fbb3cbcfa\") " pod="openstack/ovn-controller-7dz9r-config-sjgkg" Feb 17 18:01:35 crc kubenswrapper[4680]: I0217 18:01:35.113537 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-scripts\") pod \"ovn-controller-7dz9r-config-sjgkg\" (UID: \"acdbcdac-4d93-48d1-8562-647fbb3cbcfa\") " pod="openstack/ovn-controller-7dz9r-config-sjgkg" Feb 17 18:01:35 crc kubenswrapper[4680]: I0217 18:01:35.132011 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56722\" (UniqueName: \"kubernetes.io/projected/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-kube-api-access-56722\") pod \"ovn-controller-7dz9r-config-sjgkg\" (UID: \"acdbcdac-4d93-48d1-8562-647fbb3cbcfa\") " pod="openstack/ovn-controller-7dz9r-config-sjgkg" Feb 17 18:01:35 crc kubenswrapper[4680]: I0217 18:01:35.234789 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7dz9r-config-sjgkg" Feb 17 18:01:35 crc kubenswrapper[4680]: I0217 18:01:35.624104 4680 generic.go:334] "Generic (PLEG): container finished" podID="81eeaf28-d5ac-44e3-bad2-215fc89f4895" containerID="a92f4b4af5c0b60983ad5e0d0a6a5a254d4b153918b5ee06eb42a9e9354726b2" exitCode=0 Feb 17 18:01:35 crc kubenswrapper[4680]: I0217 18:01:35.624151 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"81eeaf28-d5ac-44e3-bad2-215fc89f4895","Type":"ContainerDied","Data":"a92f4b4af5c0b60983ad5e0d0a6a5a254d4b153918b5ee06eb42a9e9354726b2"} Feb 17 18:01:35 crc kubenswrapper[4680]: I0217 18:01:35.626669 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dnfft" event={"ID":"4c1a34b3-87e0-4584-8a46-18320fe88a7e","Type":"ContainerStarted","Data":"fe3bf54f77bd051766bc67bafe3c605b50e29e1f5400474207c028d85d06b12a"} Feb 17 18:01:35 crc kubenswrapper[4680]: I0217 18:01:35.700691 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7dz9r-config-sjgkg"] Feb 17 18:01:35 crc kubenswrapper[4680]: W0217 18:01:35.709740 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacdbcdac_4d93_48d1_8562_647fbb3cbcfa.slice/crio-5984d838a0e341844d2e1d8304c972a2e339f71457c2f1165a7f14426f16ffcd WatchSource:0}: Error finding container 5984d838a0e341844d2e1d8304c972a2e339f71457c2f1165a7f14426f16ffcd: Status 404 returned error can't find the container with id 5984d838a0e341844d2e1d8304c972a2e339f71457c2f1165a7f14426f16ffcd Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.024670 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-b9q8k" Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.129567 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e281ffae-b948-40c0-b8a1-68805d081000-dispersionconf\") pod \"e281ffae-b948-40c0-b8a1-68805d081000\" (UID: \"e281ffae-b948-40c0-b8a1-68805d081000\") " Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.129637 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e281ffae-b948-40c0-b8a1-68805d081000-ring-data-devices\") pod \"e281ffae-b948-40c0-b8a1-68805d081000\" (UID: \"e281ffae-b948-40c0-b8a1-68805d081000\") " Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.129676 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e281ffae-b948-40c0-b8a1-68805d081000-scripts\") pod \"e281ffae-b948-40c0-b8a1-68805d081000\" (UID: \"e281ffae-b948-40c0-b8a1-68805d081000\") " Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.129711 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e281ffae-b948-40c0-b8a1-68805d081000-etc-swift\") pod \"e281ffae-b948-40c0-b8a1-68805d081000\" (UID: \"e281ffae-b948-40c0-b8a1-68805d081000\") " Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.129771 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj6pr\" (UniqueName: \"kubernetes.io/projected/e281ffae-b948-40c0-b8a1-68805d081000-kube-api-access-dj6pr\") pod \"e281ffae-b948-40c0-b8a1-68805d081000\" (UID: \"e281ffae-b948-40c0-b8a1-68805d081000\") " Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.129820 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e281ffae-b948-40c0-b8a1-68805d081000-combined-ca-bundle\") pod \"e281ffae-b948-40c0-b8a1-68805d081000\" (UID: \"e281ffae-b948-40c0-b8a1-68805d081000\") " Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.129922 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e281ffae-b948-40c0-b8a1-68805d081000-swiftconf\") pod \"e281ffae-b948-40c0-b8a1-68805d081000\" (UID: \"e281ffae-b948-40c0-b8a1-68805d081000\") " Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.131263 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e281ffae-b948-40c0-b8a1-68805d081000-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "e281ffae-b948-40c0-b8a1-68805d081000" (UID: "e281ffae-b948-40c0-b8a1-68805d081000"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.132100 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e281ffae-b948-40c0-b8a1-68805d081000-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "e281ffae-b948-40c0-b8a1-68805d081000" (UID: "e281ffae-b948-40c0-b8a1-68805d081000"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.146099 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e281ffae-b948-40c0-b8a1-68805d081000-kube-api-access-dj6pr" (OuterVolumeSpecName: "kube-api-access-dj6pr") pod "e281ffae-b948-40c0-b8a1-68805d081000" (UID: "e281ffae-b948-40c0-b8a1-68805d081000"). InnerVolumeSpecName "kube-api-access-dj6pr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.158121 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e281ffae-b948-40c0-b8a1-68805d081000-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "e281ffae-b948-40c0-b8a1-68805d081000" (UID: "e281ffae-b948-40c0-b8a1-68805d081000"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.161613 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e281ffae-b948-40c0-b8a1-68805d081000-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "e281ffae-b948-40c0-b8a1-68805d081000" (UID: "e281ffae-b948-40c0-b8a1-68805d081000"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.180780 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e281ffae-b948-40c0-b8a1-68805d081000-scripts" (OuterVolumeSpecName: "scripts") pod "e281ffae-b948-40c0-b8a1-68805d081000" (UID: "e281ffae-b948-40c0-b8a1-68805d081000"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.185074 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e281ffae-b948-40c0-b8a1-68805d081000-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e281ffae-b948-40c0-b8a1-68805d081000" (UID: "e281ffae-b948-40c0-b8a1-68805d081000"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.231456 4680 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e281ffae-b948-40c0-b8a1-68805d081000-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.231490 4680 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e281ffae-b948-40c0-b8a1-68805d081000-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.231500 4680 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e281ffae-b948-40c0-b8a1-68805d081000-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.231510 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e281ffae-b948-40c0-b8a1-68805d081000-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.231518 4680 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e281ffae-b948-40c0-b8a1-68805d081000-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.231530 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dj6pr\" (UniqueName: \"kubernetes.io/projected/e281ffae-b948-40c0-b8a1-68805d081000-kube-api-access-dj6pr\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.231540 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e281ffae-b948-40c0-b8a1-68805d081000-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.637048 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"81eeaf28-d5ac-44e3-bad2-215fc89f4895","Type":"ContainerStarted","Data":"4b1591e94fbc9b31e89aaf9fe2f4c5c4b820531ab45c09e9bfa78284c4de6ac0"} Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.637994 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.639674 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-b9q8k" event={"ID":"e281ffae-b948-40c0-b8a1-68805d081000","Type":"ContainerDied","Data":"8efc7a323002c580bf788726e365ea9aa4e0b3cd7f4e09a7ac6c94b06b849817"} Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.639704 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8efc7a323002c580bf788726e365ea9aa4e0b3cd7f4e09a7ac6c94b06b849817" Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.639769 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-b9q8k" Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.641865 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7dz9r-config-sjgkg" event={"ID":"acdbcdac-4d93-48d1-8562-647fbb3cbcfa","Type":"ContainerStarted","Data":"e5a1e9e8d3ffe21cd9b6c1a53d1986f7eb2af21f038d5fac24501d29df104f01"} Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.641899 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7dz9r-config-sjgkg" event={"ID":"acdbcdac-4d93-48d1-8562-647fbb3cbcfa","Type":"ContainerStarted","Data":"5984d838a0e341844d2e1d8304c972a2e339f71457c2f1165a7f14426f16ffcd"} Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.666795 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.131186002 podStartE2EDuration="1m17.666773732s" podCreationTimestamp="2026-02-17 18:00:19 +0000 UTC" firstStartedPulling="2026-02-17 18:00:21.559550516 +0000 UTC m=+951.110449195" lastFinishedPulling="2026-02-17 18:01:02.095138246 +0000 UTC m=+991.646036925" observedRunningTime="2026-02-17 18:01:36.655041134 +0000 UTC m=+1026.205939803" watchObservedRunningTime="2026-02-17 18:01:36.666773732 +0000 UTC m=+1026.217672411" Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.701201 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-7dz9r-config-sjgkg" podStartSLOduration=2.7011848929999998 podStartE2EDuration="2.701184893s" podCreationTimestamp="2026-02-17 18:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:01:36.69759472 +0000 UTC m=+1026.248493419" watchObservedRunningTime="2026-02-17 18:01:36.701184893 +0000 UTC m=+1026.252083572" Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.811065 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-xkn6f"] Feb 17 18:01:36 crc kubenswrapper[4680]: I0217 18:01:36.818499 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-xkn6f"] Feb 17 18:01:37 crc kubenswrapper[4680]: I0217 18:01:37.123018 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="094acb88-dba8-4795-9179-abb75e7348cc" path="/var/lib/kubelet/pods/094acb88-dba8-4795-9179-abb75e7348cc/volumes" Feb 17 18:01:37 crc kubenswrapper[4680]: I0217 18:01:37.657493 4680 generic.go:334] "Generic (PLEG): container finished" podID="acdbcdac-4d93-48d1-8562-647fbb3cbcfa" containerID="e5a1e9e8d3ffe21cd9b6c1a53d1986f7eb2af21f038d5fac24501d29df104f01" exitCode=0 Feb 17 18:01:37 crc kubenswrapper[4680]: I0217 18:01:37.657597 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7dz9r-config-sjgkg" event={"ID":"acdbcdac-4d93-48d1-8562-647fbb3cbcfa","Type":"ContainerDied","Data":"e5a1e9e8d3ffe21cd9b6c1a53d1986f7eb2af21f038d5fac24501d29df104f01"} Feb 17 18:01:37 crc kubenswrapper[4680]: I0217 18:01:37.667082 4680 generic.go:334] "Generic (PLEG): container finished" podID="8d9f1e89-7561-458e-9189-73106b757079" containerID="bbec7f40f1f0a58d80579676478398525fbc7efbaaf6ef30a0bd41d67e9482c8" exitCode=0 Feb 17 18:01:37 crc kubenswrapper[4680]: I0217 18:01:37.667184 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"8d9f1e89-7561-458e-9189-73106b757079","Type":"ContainerDied","Data":"bbec7f40f1f0a58d80579676478398525fbc7efbaaf6ef30a0bd41d67e9482c8"} Feb 17 18:01:38 crc kubenswrapper[4680]: I0217 18:01:38.678398 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8d9f1e89-7561-458e-9189-73106b757079","Type":"ContainerStarted","Data":"de7d8587516a3e472941c02d37a182987c7c0ee3823ff9f91bb7fe57d292b479"} Feb 17 18:01:38 crc kubenswrapper[4680]: I0217 18:01:38.678940 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:01:38 crc kubenswrapper[4680]: I0217 18:01:38.713364 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371957.141436 podStartE2EDuration="1m19.713341032s" podCreationTimestamp="2026-02-17 18:00:19 +0000 UTC" firstStartedPulling="2026-02-17 18:00:21.86115064 +0000 UTC m=+951.412049319" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:01:38.707808918 +0000 UTC m=+1028.258707597" watchObservedRunningTime="2026-02-17 18:01:38.713341032 +0000 UTC m=+1028.264239711" Feb 17 18:01:39 crc kubenswrapper[4680]: I0217 18:01:39.062139 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7dz9r-config-sjgkg" Feb 17 18:01:39 crc kubenswrapper[4680]: I0217 18:01:39.189494 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-var-run-ovn\") pod \"acdbcdac-4d93-48d1-8562-647fbb3cbcfa\" (UID: \"acdbcdac-4d93-48d1-8562-647fbb3cbcfa\") " Feb 17 18:01:39 crc kubenswrapper[4680]: I0217 18:01:39.189613 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56722\" (UniqueName: \"kubernetes.io/projected/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-kube-api-access-56722\") pod \"acdbcdac-4d93-48d1-8562-647fbb3cbcfa\" (UID: \"acdbcdac-4d93-48d1-8562-647fbb3cbcfa\") " Feb 17 18:01:39 crc kubenswrapper[4680]: I0217 18:01:39.189710 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-scripts\") pod \"acdbcdac-4d93-48d1-8562-647fbb3cbcfa\" (UID: \"acdbcdac-4d93-48d1-8562-647fbb3cbcfa\") " Feb 17 18:01:39 crc kubenswrapper[4680]: I0217 18:01:39.189806 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-var-run\") pod \"acdbcdac-4d93-48d1-8562-647fbb3cbcfa\" (UID: \"acdbcdac-4d93-48d1-8562-647fbb3cbcfa\") " Feb 17 18:01:39 crc kubenswrapper[4680]: I0217 18:01:39.189845 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-additional-scripts\") pod \"acdbcdac-4d93-48d1-8562-647fbb3cbcfa\" (UID: \"acdbcdac-4d93-48d1-8562-647fbb3cbcfa\") " Feb 17 18:01:39 crc kubenswrapper[4680]: I0217 18:01:39.189919 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-var-log-ovn\") pod \"acdbcdac-4d93-48d1-8562-647fbb3cbcfa\" (UID: \"acdbcdac-4d93-48d1-8562-647fbb3cbcfa\") " Feb 17 18:01:39 crc 
kubenswrapper[4680]: I0217 18:01:39.190549 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "acdbcdac-4d93-48d1-8562-647fbb3cbcfa" (UID: "acdbcdac-4d93-48d1-8562-647fbb3cbcfa"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 18:01:39 crc kubenswrapper[4680]: I0217 18:01:39.190598 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "acdbcdac-4d93-48d1-8562-647fbb3cbcfa" (UID: "acdbcdac-4d93-48d1-8562-647fbb3cbcfa"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 18:01:39 crc kubenswrapper[4680]: I0217 18:01:39.191191 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-var-run" (OuterVolumeSpecName: "var-run") pod "acdbcdac-4d93-48d1-8562-647fbb3cbcfa" (UID: "acdbcdac-4d93-48d1-8562-647fbb3cbcfa"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 18:01:39 crc kubenswrapper[4680]: I0217 18:01:39.191886 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "acdbcdac-4d93-48d1-8562-647fbb3cbcfa" (UID: "acdbcdac-4d93-48d1-8562-647fbb3cbcfa"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:01:39 crc kubenswrapper[4680]: I0217 18:01:39.193115 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-scripts" (OuterVolumeSpecName: "scripts") pod "acdbcdac-4d93-48d1-8562-647fbb3cbcfa" (UID: "acdbcdac-4d93-48d1-8562-647fbb3cbcfa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:01:39 crc kubenswrapper[4680]: I0217 18:01:39.197841 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-kube-api-access-56722" (OuterVolumeSpecName: "kube-api-access-56722") pod "acdbcdac-4d93-48d1-8562-647fbb3cbcfa" (UID: "acdbcdac-4d93-48d1-8562-647fbb3cbcfa"). InnerVolumeSpecName "kube-api-access-56722". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:01:39 crc kubenswrapper[4680]: I0217 18:01:39.292177 4680 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-var-run\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:39 crc kubenswrapper[4680]: I0217 18:01:39.292210 4680 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:39 crc kubenswrapper[4680]: I0217 18:01:39.292226 4680 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:39 crc kubenswrapper[4680]: I0217 18:01:39.292237 4680 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:39 crc kubenswrapper[4680]: I0217 18:01:39.292250 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56722\" (UniqueName: \"kubernetes.io/projected/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-kube-api-access-56722\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:39 crc kubenswrapper[4680]: I0217 18:01:39.292283 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/acdbcdac-4d93-48d1-8562-647fbb3cbcfa-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:39 crc kubenswrapper[4680]: I0217 18:01:39.489743 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-7dz9r" Feb 17 18:01:39 crc kubenswrapper[4680]: I0217 18:01:39.692036 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-7dz9r-config-sjgkg" Feb 17 18:01:39 crc kubenswrapper[4680]: I0217 18:01:39.697398 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7dz9r-config-sjgkg" event={"ID":"acdbcdac-4d93-48d1-8562-647fbb3cbcfa","Type":"ContainerDied","Data":"5984d838a0e341844d2e1d8304c972a2e339f71457c2f1165a7f14426f16ffcd"} Feb 17 18:01:39 crc kubenswrapper[4680]: I0217 18:01:39.697460 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5984d838a0e341844d2e1d8304c972a2e339f71457c2f1165a7f14426f16ffcd" Feb 17 18:01:40 crc kubenswrapper[4680]: I0217 18:01:40.170905 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-7dz9r-config-sjgkg"] Feb 17 18:01:40 crc kubenswrapper[4680]: I0217 18:01:40.177148 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-7dz9r-config-sjgkg"] Feb 17 18:01:41 crc kubenswrapper[4680]: I0217 18:01:41.115068 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acdbcdac-4d93-48d1-8562-647fbb3cbcfa" path="/var/lib/kubelet/pods/acdbcdac-4d93-48d1-8562-647fbb3cbcfa/volumes" Feb 17 18:01:41 crc kubenswrapper[4680]: I0217 18:01:41.838085 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-ss6nw"] Feb 17 18:01:41 crc kubenswrapper[4680]: E0217 18:01:41.839551 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e281ffae-b948-40c0-b8a1-68805d081000" containerName="swift-ring-rebalance" Feb 17 18:01:41 crc kubenswrapper[4680]: I0217 18:01:41.839573 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e281ffae-b948-40c0-b8a1-68805d081000" containerName="swift-ring-rebalance" Feb 17 18:01:41 crc kubenswrapper[4680]: E0217 18:01:41.839609 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acdbcdac-4d93-48d1-8562-647fbb3cbcfa" containerName="ovn-config" Feb 17 18:01:41 crc kubenswrapper[4680]: I0217 18:01:41.839623 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="acdbcdac-4d93-48d1-8562-647fbb3cbcfa" containerName="ovn-config" Feb 17 18:01:41 crc kubenswrapper[4680]: I0217 18:01:41.839783 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="e281ffae-b948-40c0-b8a1-68805d081000" containerName="swift-ring-rebalance" Feb 17 18:01:41 crc kubenswrapper[4680]: I0217 18:01:41.839803 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="acdbcdac-4d93-48d1-8562-647fbb3cbcfa" containerName="ovn-config" Feb 17 18:01:41 crc kubenswrapper[4680]: I0217 18:01:41.840384 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ss6nw" Feb 17 18:01:41 crc kubenswrapper[4680]: I0217 18:01:41.843672 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 17 18:01:41 crc kubenswrapper[4680]: I0217 18:01:41.847620 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ss6nw"] Feb 17 18:01:41 crc kubenswrapper[4680]: I0217 18:01:41.943341 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t4lj\" (UniqueName: \"kubernetes.io/projected/bb237184-93a1-48b5-8ae7-bfccb72ddb6c-kube-api-access-9t4lj\") pod \"root-account-create-update-ss6nw\" (UID: \"bb237184-93a1-48b5-8ae7-bfccb72ddb6c\") " pod="openstack/root-account-create-update-ss6nw" Feb 17 18:01:41 crc kubenswrapper[4680]: I0217 18:01:41.943472 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb237184-93a1-48b5-8ae7-bfccb72ddb6c-operator-scripts\") pod \"root-account-create-update-ss6nw\" (UID: \"bb237184-93a1-48b5-8ae7-bfccb72ddb6c\") " pod="openstack/root-account-create-update-ss6nw" Feb 17 18:01:42 crc kubenswrapper[4680]: I0217 18:01:42.045793 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t4lj\" (UniqueName: \"kubernetes.io/projected/bb237184-93a1-48b5-8ae7-bfccb72ddb6c-kube-api-access-9t4lj\") pod \"root-account-create-update-ss6nw\" (UID: \"bb237184-93a1-48b5-8ae7-bfccb72ddb6c\") " pod="openstack/root-account-create-update-ss6nw" Feb 17 18:01:42 crc kubenswrapper[4680]: I0217 18:01:42.046386 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb237184-93a1-48b5-8ae7-bfccb72ddb6c-operator-scripts\") pod \"root-account-create-update-ss6nw\" (UID: \"bb237184-93a1-48b5-8ae7-bfccb72ddb6c\") " pod="openstack/root-account-create-update-ss6nw" Feb 17 18:01:42 crc kubenswrapper[4680]: I0217 18:01:42.047179 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb237184-93a1-48b5-8ae7-bfccb72ddb6c-operator-scripts\") pod \"root-account-create-update-ss6nw\" (UID: \"bb237184-93a1-48b5-8ae7-bfccb72ddb6c\") " pod="openstack/root-account-create-update-ss6nw" Feb 17 18:01:42 crc kubenswrapper[4680]: I0217 18:01:42.074288 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t4lj\" (UniqueName: \"kubernetes.io/projected/bb237184-93a1-48b5-8ae7-bfccb72ddb6c-kube-api-access-9t4lj\") pod \"root-account-create-update-ss6nw\" (UID: \"bb237184-93a1-48b5-8ae7-bfccb72ddb6c\") " pod="openstack/root-account-create-update-ss6nw" Feb 17 18:01:42 crc kubenswrapper[4680]: I0217 18:01:42.167311 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ss6nw" Feb 17 18:01:48 crc kubenswrapper[4680]: I0217 18:01:48.902676 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-etc-swift\") pod \"swift-storage-0\" (UID: \"54c53390-eb8a-4a06-9d50-7d0a4f10d3df\") " pod="openstack/swift-storage-0" Feb 17 18:01:48 crc kubenswrapper[4680]: I0217 18:01:48.920728 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/54c53390-eb8a-4a06-9d50-7d0a4f10d3df-etc-swift\") pod \"swift-storage-0\" (UID: \"54c53390-eb8a-4a06-9d50-7d0a4f10d3df\") " pod="openstack/swift-storage-0" Feb 17 18:01:49 crc kubenswrapper[4680]: I0217 18:01:49.199160 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 17 18:01:49 crc kubenswrapper[4680]: I0217 18:01:49.952026 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ss6nw"] Feb 17 18:01:49 crc kubenswrapper[4680]: W0217 18:01:49.955002 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb237184_93a1_48b5_8ae7_bfccb72ddb6c.slice/crio-746b7e600d1953dd93c9e43c4f7ec79fc64ca12b1493e4f157bf4606e13fac61 WatchSource:0}: Error finding container 746b7e600d1953dd93c9e43c4f7ec79fc64ca12b1493e4f157bf4606e13fac61: Status 404 returned error can't find the container with id 746b7e600d1953dd93c9e43c4f7ec79fc64ca12b1493e4f157bf4606e13fac61 Feb 17 18:01:50 crc kubenswrapper[4680]: I0217 18:01:50.134517 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 17 18:01:50 crc kubenswrapper[4680]: I0217 18:01:50.788385 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dnfft" event={"ID":"4c1a34b3-87e0-4584-8a46-18320fe88a7e","Type":"ContainerStarted","Data":"6fb39025c05e3edcf45598fd9c8259e9e07bfa52f700036a907cc2c3b7c577cc"} Feb 17 18:01:50 crc kubenswrapper[4680]: I0217 18:01:50.790367 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"54c53390-eb8a-4a06-9d50-7d0a4f10d3df","Type":"ContainerStarted","Data":"cc00c0c2c594fcb6e3d92abcab46cf6094bd4b8dff5ba77b67963cf530802f23"} Feb 17 18:01:50 crc kubenswrapper[4680]: I0217 18:01:50.795466 4680 generic.go:334] "Generic (PLEG): container finished" podID="bb237184-93a1-48b5-8ae7-bfccb72ddb6c" containerID="484cf52dc7dcabfd9107b439da5050e2c2f67dc4e690aa8bb04062303583bf71" exitCode=0 Feb 17 18:01:50 crc kubenswrapper[4680]: I0217 18:01:50.795522 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ss6nw" event={"ID":"bb237184-93a1-48b5-8ae7-bfccb72ddb6c","Type":"ContainerDied","Data":"484cf52dc7dcabfd9107b439da5050e2c2f67dc4e690aa8bb04062303583bf71"} Feb 17 18:01:50 crc kubenswrapper[4680]: I0217 18:01:50.795554 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ss6nw" event={"ID":"bb237184-93a1-48b5-8ae7-bfccb72ddb6c","Type":"ContainerStarted","Data":"746b7e600d1953dd93c9e43c4f7ec79fc64ca12b1493e4f157bf4606e13fac61"} Feb 17 18:01:50 crc kubenswrapper[4680]: I0217 18:01:50.818534 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-dnfft" podStartSLOduration=2.294987457 podStartE2EDuration="16.818518311s" 
podCreationTimestamp="2026-02-17 18:01:34 +0000 UTC" firstStartedPulling="2026-02-17 18:01:35.069713289 +0000 UTC m=+1024.620611968" lastFinishedPulling="2026-02-17 18:01:49.593244143 +0000 UTC m=+1039.144142822" observedRunningTime="2026-02-17 18:01:50.815087543 +0000 UTC m=+1040.365986222" watchObservedRunningTime="2026-02-17 18:01:50.818518311 +0000 UTC m=+1040.369416990" Feb 17 18:01:50 crc kubenswrapper[4680]: I0217 18:01:50.824484 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.045799 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.328226 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-ck6vb"] Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.329484 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ck6vb" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.345794 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-ck6vb"] Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.415067 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-72aa-account-create-update-ssg2c"] Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.423774 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-72aa-account-create-update-ssg2c" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.430845 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.440687 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-72aa-account-create-update-ssg2c"] Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.471435 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/065d2f14-a873-4ca1-b0aa-6bde51d5aec0-operator-scripts\") pod \"cinder-db-create-ck6vb\" (UID: \"065d2f14-a873-4ca1-b0aa-6bde51d5aec0\") " pod="openstack/cinder-db-create-ck6vb" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.471757 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6w2z\" (UniqueName: \"kubernetes.io/projected/065d2f14-a873-4ca1-b0aa-6bde51d5aec0-kube-api-access-p6w2z\") pod \"cinder-db-create-ck6vb\" (UID: \"065d2f14-a873-4ca1-b0aa-6bde51d5aec0\") " pod="openstack/cinder-db-create-ck6vb" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.573958 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff9aefee-2f32-417a-a2e1-4cb309fc6c19-operator-scripts\") pod \"cinder-72aa-account-create-update-ssg2c\" (UID: \"ff9aefee-2f32-417a-a2e1-4cb309fc6c19\") " pod="openstack/cinder-72aa-account-create-update-ssg2c" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.574025 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6w2z\" (UniqueName: \"kubernetes.io/projected/065d2f14-a873-4ca1-b0aa-6bde51d5aec0-kube-api-access-p6w2z\") pod \"cinder-db-create-ck6vb\" (UID: \"065d2f14-a873-4ca1-b0aa-6bde51d5aec0\") " pod="openstack/cinder-db-create-ck6vb" 
Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.574212 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd6qf\" (UniqueName: \"kubernetes.io/projected/ff9aefee-2f32-417a-a2e1-4cb309fc6c19-kube-api-access-fd6qf\") pod \"cinder-72aa-account-create-update-ssg2c\" (UID: \"ff9aefee-2f32-417a-a2e1-4cb309fc6c19\") " pod="openstack/cinder-72aa-account-create-update-ssg2c" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.574458 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/065d2f14-a873-4ca1-b0aa-6bde51d5aec0-operator-scripts\") pod \"cinder-db-create-ck6vb\" (UID: \"065d2f14-a873-4ca1-b0aa-6bde51d5aec0\") " pod="openstack/cinder-db-create-ck6vb" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.575232 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/065d2f14-a873-4ca1-b0aa-6bde51d5aec0-operator-scripts\") pod \"cinder-db-create-ck6vb\" (UID: \"065d2f14-a873-4ca1-b0aa-6bde51d5aec0\") " pod="openstack/cinder-db-create-ck6vb" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.584152 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-ncbx6"] Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.585433 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ncbx6" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.598466 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-ncbx6"] Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.612980 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6w2z\" (UniqueName: \"kubernetes.io/projected/065d2f14-a873-4ca1-b0aa-6bde51d5aec0-kube-api-access-p6w2z\") pod \"cinder-db-create-ck6vb\" (UID: \"065d2f14-a873-4ca1-b0aa-6bde51d5aec0\") " pod="openstack/cinder-db-create-ck6vb" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.679585 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6z5d\" (UniqueName: \"kubernetes.io/projected/46799f27-6bac-4031-9277-7310bf49adbc-kube-api-access-h6z5d\") pod \"barbican-db-create-ncbx6\" (UID: \"46799f27-6bac-4031-9277-7310bf49adbc\") " pod="openstack/barbican-db-create-ncbx6" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.679692 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46799f27-6bac-4031-9277-7310bf49adbc-operator-scripts\") pod \"barbican-db-create-ncbx6\" (UID: \"46799f27-6bac-4031-9277-7310bf49adbc\") " pod="openstack/barbican-db-create-ncbx6" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.679729 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd6qf\" (UniqueName: \"kubernetes.io/projected/ff9aefee-2f32-417a-a2e1-4cb309fc6c19-kube-api-access-fd6qf\") pod \"cinder-72aa-account-create-update-ssg2c\" (UID: \"ff9aefee-2f32-417a-a2e1-4cb309fc6c19\") " pod="openstack/cinder-72aa-account-create-update-ssg2c" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.680126 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ff9aefee-2f32-417a-a2e1-4cb309fc6c19-operator-scripts\") pod \"cinder-72aa-account-create-update-ssg2c\" (UID: \"ff9aefee-2f32-417a-a2e1-4cb309fc6c19\") " pod="openstack/cinder-72aa-account-create-update-ssg2c" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.681021 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff9aefee-2f32-417a-a2e1-4cb309fc6c19-operator-scripts\") pod \"cinder-72aa-account-create-update-ssg2c\" (UID: \"ff9aefee-2f32-417a-a2e1-4cb309fc6c19\") " pod="openstack/cinder-72aa-account-create-update-ssg2c" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.686753 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-96c9q"] Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.688449 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-96c9q" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.691046 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ck6vb" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.709739 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.710114 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.738927 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.740067 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fww75" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.755280 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd6qf\" (UniqueName: \"kubernetes.io/projected/ff9aefee-2f32-417a-a2e1-4cb309fc6c19-kube-api-access-fd6qf\") pod \"cinder-72aa-account-create-update-ssg2c\" (UID: \"ff9aefee-2f32-417a-a2e1-4cb309fc6c19\") " pod="openstack/cinder-72aa-account-create-update-ssg2c" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.755670 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-96c9q"] Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.772446 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-988e-account-create-update-5qtck"] Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.774026 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-988e-account-create-update-5qtck" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.779383 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-988e-account-create-update-5qtck"] Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.785022 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw2jj\" (UniqueName: \"kubernetes.io/projected/ce8c3e62-2beb-456a-b77b-386213f4bf98-kube-api-access-qw2jj\") pod \"keystone-db-sync-96c9q\" (UID: \"ce8c3e62-2beb-456a-b77b-386213f4bf98\") " pod="openstack/keystone-db-sync-96c9q" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.788098 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce8c3e62-2beb-456a-b77b-386213f4bf98-combined-ca-bundle\") pod \"keystone-db-sync-96c9q\" (UID: \"ce8c3e62-2beb-456a-b77b-386213f4bf98\") " pod="openstack/keystone-db-sync-96c9q" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.788335 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6z5d\" (UniqueName: \"kubernetes.io/projected/46799f27-6bac-4031-9277-7310bf49adbc-kube-api-access-h6z5d\") pod \"barbican-db-create-ncbx6\" (UID: \"46799f27-6bac-4031-9277-7310bf49adbc\") " pod="openstack/barbican-db-create-ncbx6" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.795782 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46799f27-6bac-4031-9277-7310bf49adbc-operator-scripts\") pod \"barbican-db-create-ncbx6\" (UID: \"46799f27-6bac-4031-9277-7310bf49adbc\") " pod="openstack/barbican-db-create-ncbx6" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.796042 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce8c3e62-2beb-456a-b77b-386213f4bf98-config-data\") pod \"keystone-db-sync-96c9q\" (UID: \"ce8c3e62-2beb-456a-b77b-386213f4bf98\") " pod="openstack/keystone-db-sync-96c9q" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.793361 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.797129 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46799f27-6bac-4031-9277-7310bf49adbc-operator-scripts\") pod \"barbican-db-create-ncbx6\" (UID: \"46799f27-6bac-4031-9277-7310bf49adbc\") " pod="openstack/barbican-db-create-ncbx6" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.809381 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-lvrxv"] Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.810838 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-lvrxv" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.834672 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6z5d\" (UniqueName: \"kubernetes.io/projected/46799f27-6bac-4031-9277-7310bf49adbc-kube-api-access-h6z5d\") pod \"barbican-db-create-ncbx6\" (UID: \"46799f27-6bac-4031-9277-7310bf49adbc\") " pod="openstack/barbican-db-create-ncbx6" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.846495 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lvrxv"] Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.898091 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce8c3e62-2beb-456a-b77b-386213f4bf98-combined-ca-bundle\") pod \"keystone-db-sync-96c9q\" (UID: \"ce8c3e62-2beb-456a-b77b-386213f4bf98\") " pod="openstack/keystone-db-sync-96c9q" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.901255 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgwhd\" (UniqueName: \"kubernetes.io/projected/d0686b19-fda5-4f55-88b4-f831de276058-kube-api-access-wgwhd\") pod \"barbican-988e-account-create-update-5qtck\" (UID: \"d0686b19-fda5-4f55-88b4-f831de276058\") " pod="openstack/barbican-988e-account-create-update-5qtck" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.901424 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce8c3e62-2beb-456a-b77b-386213f4bf98-config-data\") pod \"keystone-db-sync-96c9q\" (UID: \"ce8c3e62-2beb-456a-b77b-386213f4bf98\") " pod="openstack/keystone-db-sync-96c9q" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.901582 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw2jj\" (UniqueName: \"kubernetes.io/projected/ce8c3e62-2beb-456a-b77b-386213f4bf98-kube-api-access-qw2jj\") pod \"keystone-db-sync-96c9q\" (UID: \"ce8c3e62-2beb-456a-b77b-386213f4bf98\") " pod="openstack/keystone-db-sync-96c9q" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.901691 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e03ce69-e84a-4e27-bdc6-48625c11da9f-operator-scripts\") pod \"neutron-db-create-lvrxv\" (UID: \"8e03ce69-e84a-4e27-bdc6-48625c11da9f\") " pod="openstack/neutron-db-create-lvrxv" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.901803 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkznh\" (UniqueName: \"kubernetes.io/projected/8e03ce69-e84a-4e27-bdc6-48625c11da9f-kube-api-access-jkznh\") pod \"neutron-db-create-lvrxv\" (UID: \"8e03ce69-e84a-4e27-bdc6-48625c11da9f\") " pod="openstack/neutron-db-create-lvrxv" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.901907 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0686b19-fda5-4f55-88b4-f831de276058-operator-scripts\") pod \"barbican-988e-account-create-update-5qtck\" (UID: \"d0686b19-fda5-4f55-88b4-f831de276058\") " pod="openstack/barbican-988e-account-create-update-5qtck" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.931670 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce8c3e62-2beb-456a-b77b-386213f4bf98-config-data\") pod \"keystone-db-sync-96c9q\" (UID: \"ce8c3e62-2beb-456a-b77b-386213f4bf98\") " pod="openstack/keystone-db-sync-96c9q" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.936927 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce8c3e62-2beb-456a-b77b-386213f4bf98-combined-ca-bundle\") pod \"keystone-db-sync-96c9q\" (UID: \"ce8c3e62-2beb-456a-b77b-386213f4bf98\") " pod="openstack/keystone-db-sync-96c9q" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.957369 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw2jj\" (UniqueName: \"kubernetes.io/projected/ce8c3e62-2beb-456a-b77b-386213f4bf98-kube-api-access-qw2jj\") pod \"keystone-db-sync-96c9q\" (UID: \"ce8c3e62-2beb-456a-b77b-386213f4bf98\") " pod="openstack/keystone-db-sync-96c9q" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.965127 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-379c-account-create-update-phc6b"] Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.966337 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-379c-account-create-update-phc6b" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.968978 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.975732 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ncbx6" Feb 17 18:01:51 crc kubenswrapper[4680]: I0217 18:01:51.983126 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-379c-account-create-update-phc6b"] Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.004289 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e03ce69-e84a-4e27-bdc6-48625c11da9f-operator-scripts\") pod \"neutron-db-create-lvrxv\" (UID: \"8e03ce69-e84a-4e27-bdc6-48625c11da9f\") " pod="openstack/neutron-db-create-lvrxv" Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.005187 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkznh\" (UniqueName: \"kubernetes.io/projected/8e03ce69-e84a-4e27-bdc6-48625c11da9f-kube-api-access-jkznh\") pod \"neutron-db-create-lvrxv\" (UID: \"8e03ce69-e84a-4e27-bdc6-48625c11da9f\") " pod="openstack/neutron-db-create-lvrxv" Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.005461 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0686b19-fda5-4f55-88b4-f831de276058-operator-scripts\") pod \"barbican-988e-account-create-update-5qtck\" (UID: \"d0686b19-fda5-4f55-88b4-f831de276058\") " pod="openstack/barbican-988e-account-create-update-5qtck" Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.005600 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgwhd\" (UniqueName: \"kubernetes.io/projected/d0686b19-fda5-4f55-88b4-f831de276058-kube-api-access-wgwhd\") pod \"barbican-988e-account-create-update-5qtck\" (UID: \"d0686b19-fda5-4f55-88b4-f831de276058\") " pod="openstack/barbican-988e-account-create-update-5qtck" Feb 17 18:01:52 crc 
kubenswrapper[4680]: I0217 18:01:52.005116 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e03ce69-e84a-4e27-bdc6-48625c11da9f-operator-scripts\") pod \"neutron-db-create-lvrxv\" (UID: \"8e03ce69-e84a-4e27-bdc6-48625c11da9f\") " pod="openstack/neutron-db-create-lvrxv" Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.006164 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0686b19-fda5-4f55-88b4-f831de276058-operator-scripts\") pod \"barbican-988e-account-create-update-5qtck\" (UID: \"d0686b19-fda5-4f55-88b4-f831de276058\") " pod="openstack/barbican-988e-account-create-update-5qtck" Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.015726 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-96c9q" Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.027090 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkznh\" (UniqueName: \"kubernetes.io/projected/8e03ce69-e84a-4e27-bdc6-48625c11da9f-kube-api-access-jkznh\") pod \"neutron-db-create-lvrxv\" (UID: \"8e03ce69-e84a-4e27-bdc6-48625c11da9f\") " pod="openstack/neutron-db-create-lvrxv" Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.030078 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgwhd\" (UniqueName: \"kubernetes.io/projected/d0686b19-fda5-4f55-88b4-f831de276058-kube-api-access-wgwhd\") pod \"barbican-988e-account-create-update-5qtck\" (UID: \"d0686b19-fda5-4f55-88b4-f831de276058\") " pod="openstack/barbican-988e-account-create-update-5qtck" Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.052041 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-72aa-account-create-update-ssg2c" Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.106635 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/904c1e1e-004a-4ec1-8981-2bcbbbbea7e0-operator-scripts\") pod \"neutron-379c-account-create-update-phc6b\" (UID: \"904c1e1e-004a-4ec1-8981-2bcbbbbea7e0\") " pod="openstack/neutron-379c-account-create-update-phc6b" Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.106951 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlqp2\" (UniqueName: \"kubernetes.io/projected/904c1e1e-004a-4ec1-8981-2bcbbbbea7e0-kube-api-access-vlqp2\") pod \"neutron-379c-account-create-update-phc6b\" (UID: \"904c1e1e-004a-4ec1-8981-2bcbbbbea7e0\") " pod="openstack/neutron-379c-account-create-update-phc6b" Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.122595 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-988e-account-create-update-5qtck" Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.226201 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/904c1e1e-004a-4ec1-8981-2bcbbbbea7e0-operator-scripts\") pod \"neutron-379c-account-create-update-phc6b\" (UID: \"904c1e1e-004a-4ec1-8981-2bcbbbbea7e0\") " pod="openstack/neutron-379c-account-create-update-phc6b" Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.226416 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlqp2\" (UniqueName: \"kubernetes.io/projected/904c1e1e-004a-4ec1-8981-2bcbbbbea7e0-kube-api-access-vlqp2\") pod \"neutron-379c-account-create-update-phc6b\" (UID: \"904c1e1e-004a-4ec1-8981-2bcbbbbea7e0\") " pod="openstack/neutron-379c-account-create-update-phc6b" Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.228162 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/904c1e1e-004a-4ec1-8981-2bcbbbbea7e0-operator-scripts\") pod \"neutron-379c-account-create-update-phc6b\" (UID: \"904c1e1e-004a-4ec1-8981-2bcbbbbea7e0\") " pod="openstack/neutron-379c-account-create-update-phc6b" Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.254185 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lvrxv" Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.262012 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlqp2\" (UniqueName: \"kubernetes.io/projected/904c1e1e-004a-4ec1-8981-2bcbbbbea7e0-kube-api-access-vlqp2\") pod \"neutron-379c-account-create-update-phc6b\" (UID: \"904c1e1e-004a-4ec1-8981-2bcbbbbea7e0\") " pod="openstack/neutron-379c-account-create-update-phc6b" Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.300167 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-379c-account-create-update-phc6b" Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.306936 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ss6nw" Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.438453 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t4lj\" (UniqueName: \"kubernetes.io/projected/bb237184-93a1-48b5-8ae7-bfccb72ddb6c-kube-api-access-9t4lj\") pod \"bb237184-93a1-48b5-8ae7-bfccb72ddb6c\" (UID: \"bb237184-93a1-48b5-8ae7-bfccb72ddb6c\") " Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.438581 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb237184-93a1-48b5-8ae7-bfccb72ddb6c-operator-scripts\") pod \"bb237184-93a1-48b5-8ae7-bfccb72ddb6c\" (UID: \"bb237184-93a1-48b5-8ae7-bfccb72ddb6c\") " Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.440246 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb237184-93a1-48b5-8ae7-bfccb72ddb6c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bb237184-93a1-48b5-8ae7-bfccb72ddb6c" (UID: "bb237184-93a1-48b5-8ae7-bfccb72ddb6c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.450464 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb237184-93a1-48b5-8ae7-bfccb72ddb6c-kube-api-access-9t4lj" (OuterVolumeSpecName: "kube-api-access-9t4lj") pod "bb237184-93a1-48b5-8ae7-bfccb72ddb6c" (UID: "bb237184-93a1-48b5-8ae7-bfccb72ddb6c"). InnerVolumeSpecName "kube-api-access-9t4lj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.541660 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t4lj\" (UniqueName: \"kubernetes.io/projected/bb237184-93a1-48b5-8ae7-bfccb72ddb6c-kube-api-access-9t4lj\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.541707 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb237184-93a1-48b5-8ae7-bfccb72ddb6c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.577480 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-ck6vb"] Feb 17 18:01:52 crc kubenswrapper[4680]: W0217 18:01:52.617604 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod065d2f14_a873_4ca1_b0aa_6bde51d5aec0.slice/crio-0fbd2519f33b7993e8deec324cbf43815b468dd198dc7482be9fdccfd5585b74 WatchSource:0}: Error finding container 0fbd2519f33b7993e8deec324cbf43815b468dd198dc7482be9fdccfd5585b74: Status 404 returned error can't find the container with id 0fbd2519f33b7993e8deec324cbf43815b468dd198dc7482be9fdccfd5585b74 Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.834397 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ck6vb" event={"ID":"065d2f14-a873-4ca1-b0aa-6bde51d5aec0","Type":"ContainerStarted","Data":"0fbd2519f33b7993e8deec324cbf43815b468dd198dc7482be9fdccfd5585b74"} Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.850129 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"54c53390-eb8a-4a06-9d50-7d0a4f10d3df","Type":"ContainerStarted","Data":"3ffed09eab5c4b35f5cc747444d39c4e5e99600399343720829bbb9cb0b2fa2f"} Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.850178 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"54c53390-eb8a-4a06-9d50-7d0a4f10d3df","Type":"ContainerStarted","Data":"dd21ba804a7cfc2c728997ea5d5d588c8121a3b54de6e8ae45057f8c760cb6d7"} Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.868700 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ss6nw" event={"ID":"bb237184-93a1-48b5-8ae7-bfccb72ddb6c","Type":"ContainerDied","Data":"746b7e600d1953dd93c9e43c4f7ec79fc64ca12b1493e4f157bf4606e13fac61"} Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.868767 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="746b7e600d1953dd93c9e43c4f7ec79fc64ca12b1493e4f157bf4606e13fac61" Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.868904 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ss6nw" Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.959224 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-ncbx6"] Feb 17 18:01:52 crc kubenswrapper[4680]: W0217 18:01:52.981365 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46799f27_6bac_4031_9277_7310bf49adbc.slice/crio-c33567530702fb7fdb523c2ee53d21846a5e703d0ca5be213dc0bff35242531c WatchSource:0}: Error finding container c33567530702fb7fdb523c2ee53d21846a5e703d0ca5be213dc0bff35242531c: Status 404 returned error can't find the container with id c33567530702fb7fdb523c2ee53d21846a5e703d0ca5be213dc0bff35242531c Feb 17 18:01:52 crc kubenswrapper[4680]: I0217 18:01:52.986488 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-96c9q"] Feb 17 18:01:53 crc kubenswrapper[4680]: I0217 18:01:53.153971 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-72aa-account-create-update-ssg2c"] Feb 17 18:01:53 crc kubenswrapper[4680]: I0217 18:01:53.179865 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-988e-account-create-update-5qtck"] Feb 17 18:01:53 crc kubenswrapper[4680]: I0217 18:01:53.188651 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lvrxv"] Feb 17 18:01:53 crc kubenswrapper[4680]: I0217 18:01:53.383872 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-379c-account-create-update-phc6b"] Feb 17 18:01:53 crc kubenswrapper[4680]: W0217 18:01:53.486716 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod904c1e1e_004a_4ec1_8981_2bcbbbbea7e0.slice/crio-a63104fd8fecf40dcfff89c7d111d2fa8bcaf33b3480f7911c9e4c7b6e9b4fac WatchSource:0}: Error finding container a63104fd8fecf40dcfff89c7d111d2fa8bcaf33b3480f7911c9e4c7b6e9b4fac: Status 404 returned error can't find the container with id a63104fd8fecf40dcfff89c7d111d2fa8bcaf33b3480f7911c9e4c7b6e9b4fac Feb 17 18:01:53 crc kubenswrapper[4680]: I0217 18:01:53.898040 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-988e-account-create-update-5qtck" event={"ID":"d0686b19-fda5-4f55-88b4-f831de276058","Type":"ContainerStarted","Data":"8c0b9330037ebc4b13964cceaf9af9d288a48b9fb5a2e07c04580bc9834cfd37"} Feb 17 18:01:53 crc kubenswrapper[4680]: I0217 18:01:53.898403 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-988e-account-create-update-5qtck" event={"ID":"d0686b19-fda5-4f55-88b4-f831de276058","Type":"ContainerStarted","Data":"3f43492aa63a87440ca28e040c1e2f7acf748abefa9467142e7c573070b4ba4b"} Feb 17 18:01:53 crc kubenswrapper[4680]: I0217 18:01:53.909580 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-96c9q" event={"ID":"ce8c3e62-2beb-456a-b77b-386213f4bf98","Type":"ContainerStarted","Data":"045ca1c7647c6eefe83e7c9e0212e3ff9225304fc2475d14bece280f8575d081"} Feb 17 18:01:53 crc kubenswrapper[4680]: I0217 18:01:53.921002 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-72aa-account-create-update-ssg2c" event={"ID":"ff9aefee-2f32-417a-a2e1-4cb309fc6c19","Type":"ContainerStarted","Data":"7bb236f312bb3bfa1c8f9aafa99550ae6f38fbccb021cd18207627d42a061fd4"} Feb 17 18:01:53 crc kubenswrapper[4680]: I0217 18:01:53.921044 4680 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/cinder-72aa-account-create-update-ssg2c" event={"ID":"ff9aefee-2f32-417a-a2e1-4cb309fc6c19","Type":"ContainerStarted","Data":"e35c983edcb810092fc3499024cf72ded6281a68148d652e2350b42f074cd35f"} Feb 17 18:01:53 crc kubenswrapper[4680]: I0217 18:01:53.925055 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-988e-account-create-update-5qtck" podStartSLOduration=2.925038011 podStartE2EDuration="2.925038011s" podCreationTimestamp="2026-02-17 18:01:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:01:53.915155051 +0000 UTC m=+1043.466053730" watchObservedRunningTime="2026-02-17 18:01:53.925038011 +0000 UTC m=+1043.475936720" Feb 17 18:01:53 crc kubenswrapper[4680]: I0217 18:01:53.927126 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lvrxv" event={"ID":"8e03ce69-e84a-4e27-bdc6-48625c11da9f","Type":"ContainerStarted","Data":"2b5fc39d7b02d82f2a0866709f1df8f7ca2cf377367c88bdf8a4438d06da3987"} Feb 17 18:01:53 crc kubenswrapper[4680]: I0217 18:01:53.927170 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lvrxv" event={"ID":"8e03ce69-e84a-4e27-bdc6-48625c11da9f","Type":"ContainerStarted","Data":"c13c2e0025d4276cf662cb6be4ed9e63cfe47cd250379b4c5e0782fa80585f4f"} Feb 17 18:01:53 crc kubenswrapper[4680]: I0217 18:01:53.943771 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-72aa-account-create-update-ssg2c" podStartSLOduration=2.943746848 podStartE2EDuration="2.943746848s" podCreationTimestamp="2026-02-17 18:01:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:01:53.939735322 +0000 UTC m=+1043.490634001" watchObservedRunningTime="2026-02-17 18:01:53.943746848 +0000 UTC m=+1043.494645537" Feb 17 18:01:53 crc kubenswrapper[4680]: I0217 18:01:53.944259 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-379c-account-create-update-phc6b" event={"ID":"904c1e1e-004a-4ec1-8981-2bcbbbbea7e0","Type":"ContainerStarted","Data":"4fdcbf45c1e97f6a68666baf33df630aaca8de21af46e4058c34e571c69be445"} Feb 17 18:01:53 crc kubenswrapper[4680]: I0217 18:01:53.944355 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-379c-account-create-update-phc6b" event={"ID":"904c1e1e-004a-4ec1-8981-2bcbbbbea7e0","Type":"ContainerStarted","Data":"a63104fd8fecf40dcfff89c7d111d2fa8bcaf33b3480f7911c9e4c7b6e9b4fac"} Feb 17 18:01:53 crc kubenswrapper[4680]: I0217 18:01:53.950520 4680 generic.go:334] "Generic (PLEG): container finished" podID="46799f27-6bac-4031-9277-7310bf49adbc" containerID="de7b12c3fcef1c67e78a398deafd767a418da626e6641de0fe2801c8a01f696e" exitCode=0 Feb 17 18:01:53 crc kubenswrapper[4680]: I0217 18:01:53.950667 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ncbx6" event={"ID":"46799f27-6bac-4031-9277-7310bf49adbc","Type":"ContainerDied","Data":"de7b12c3fcef1c67e78a398deafd767a418da626e6641de0fe2801c8a01f696e"} Feb 17 18:01:53 crc kubenswrapper[4680]: I0217 18:01:53.950691 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ncbx6" event={"ID":"46799f27-6bac-4031-9277-7310bf49adbc","Type":"ContainerStarted","Data":"c33567530702fb7fdb523c2ee53d21846a5e703d0ca5be213dc0bff35242531c"} Feb 17 18:01:53 
crc kubenswrapper[4680]: I0217 18:01:53.955756 4680 generic.go:334] "Generic (PLEG): container finished" podID="065d2f14-a873-4ca1-b0aa-6bde51d5aec0" containerID="f85a3cba93c57c0aaae7f3708c676699085b6811c2404eaf8b13976d7b9c6a84" exitCode=0 Feb 17 18:01:53 crc kubenswrapper[4680]: I0217 18:01:53.955820 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ck6vb" event={"ID":"065d2f14-a873-4ca1-b0aa-6bde51d5aec0","Type":"ContainerDied","Data":"f85a3cba93c57c0aaae7f3708c676699085b6811c2404eaf8b13976d7b9c6a84"} Feb 17 18:01:53 crc kubenswrapper[4680]: I0217 18:01:53.974670 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-lvrxv" podStartSLOduration=2.974652178 podStartE2EDuration="2.974652178s" podCreationTimestamp="2026-02-17 18:01:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:01:53.958553103 +0000 UTC m=+1043.509451782" watchObservedRunningTime="2026-02-17 18:01:53.974652178 +0000 UTC m=+1043.525550857" Feb 17 18:01:53 crc kubenswrapper[4680]: I0217 18:01:53.976343 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"54c53390-eb8a-4a06-9d50-7d0a4f10d3df","Type":"ContainerStarted","Data":"bb761105ebc65c100ff410885594541aa243947581a6b0065299ede0fd9b0c8d"} Feb 17 18:01:53 crc kubenswrapper[4680]: I0217 18:01:53.976381 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"54c53390-eb8a-4a06-9d50-7d0a4f10d3df","Type":"ContainerStarted","Data":"6f7e579ac284779b23dc6d3a586651752fa71e9188d48da8092577d20cac4077"} Feb 17 18:01:54 crc kubenswrapper[4680]: I0217 18:01:54.014240 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-379c-account-create-update-phc6b" podStartSLOduration=3.01421702 podStartE2EDuration="3.01421702s" podCreationTimestamp="2026-02-17 18:01:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:01:54.003977588 +0000 UTC m=+1043.554876287" watchObservedRunningTime="2026-02-17 18:01:54.01421702 +0000 UTC m=+1043.565115699" Feb 17 18:01:54 crc kubenswrapper[4680]: I0217 18:01:54.997239 4680 generic.go:334] "Generic (PLEG): container finished" podID="8e03ce69-e84a-4e27-bdc6-48625c11da9f" containerID="2b5fc39d7b02d82f2a0866709f1df8f7ca2cf377367c88bdf8a4438d06da3987" exitCode=0 Feb 17 18:01:54 crc kubenswrapper[4680]: I0217 18:01:54.997577 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lvrxv" event={"ID":"8e03ce69-e84a-4e27-bdc6-48625c11da9f","Type":"ContainerDied","Data":"2b5fc39d7b02d82f2a0866709f1df8f7ca2cf377367c88bdf8a4438d06da3987"} Feb 17 18:01:55 crc kubenswrapper[4680]: I0217 18:01:55.001974 4680 generic.go:334] "Generic (PLEG): container finished" podID="904c1e1e-004a-4ec1-8981-2bcbbbbea7e0" containerID="4fdcbf45c1e97f6a68666baf33df630aaca8de21af46e4058c34e571c69be445" exitCode=0 Feb 17 18:01:55 crc kubenswrapper[4680]: I0217 18:01:55.002029 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-379c-account-create-update-phc6b" event={"ID":"904c1e1e-004a-4ec1-8981-2bcbbbbea7e0","Type":"ContainerDied","Data":"4fdcbf45c1e97f6a68666baf33df630aaca8de21af46e4058c34e571c69be445"} Feb 17 18:01:55 crc kubenswrapper[4680]: I0217 18:01:55.007119 4680 generic.go:334] "Generic (PLEG): container 
finished" podID="d0686b19-fda5-4f55-88b4-f831de276058" containerID="8c0b9330037ebc4b13964cceaf9af9d288a48b9fb5a2e07c04580bc9834cfd37" exitCode=0 Feb 17 18:01:55 crc kubenswrapper[4680]: I0217 18:01:55.007187 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-988e-account-create-update-5qtck" event={"ID":"d0686b19-fda5-4f55-88b4-f831de276058","Type":"ContainerDied","Data":"8c0b9330037ebc4b13964cceaf9af9d288a48b9fb5a2e07c04580bc9834cfd37"} Feb 17 18:01:55 crc kubenswrapper[4680]: I0217 18:01:55.009008 4680 generic.go:334] "Generic (PLEG): container finished" podID="ff9aefee-2f32-417a-a2e1-4cb309fc6c19" containerID="7bb236f312bb3bfa1c8f9aafa99550ae6f38fbccb021cd18207627d42a061fd4" exitCode=0 Feb 17 18:01:55 crc kubenswrapper[4680]: I0217 18:01:55.009156 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-72aa-account-create-update-ssg2c" event={"ID":"ff9aefee-2f32-417a-a2e1-4cb309fc6c19","Type":"ContainerDied","Data":"7bb236f312bb3bfa1c8f9aafa99550ae6f38fbccb021cd18207627d42a061fd4"} Feb 17 18:01:55 crc kubenswrapper[4680]: I0217 18:01:55.473357 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ck6vb" Feb 17 18:01:55 crc kubenswrapper[4680]: I0217 18:01:55.583956 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ncbx6" Feb 17 18:01:55 crc kubenswrapper[4680]: I0217 18:01:55.615882 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6z5d\" (UniqueName: \"kubernetes.io/projected/46799f27-6bac-4031-9277-7310bf49adbc-kube-api-access-h6z5d\") pod \"46799f27-6bac-4031-9277-7310bf49adbc\" (UID: \"46799f27-6bac-4031-9277-7310bf49adbc\") " Feb 17 18:01:55 crc kubenswrapper[4680]: I0217 18:01:55.616175 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/065d2f14-a873-4ca1-b0aa-6bde51d5aec0-operator-scripts\") pod \"065d2f14-a873-4ca1-b0aa-6bde51d5aec0\" (UID: \"065d2f14-a873-4ca1-b0aa-6bde51d5aec0\") " Feb 17 18:01:55 crc kubenswrapper[4680]: I0217 18:01:55.616334 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6w2z\" (UniqueName: \"kubernetes.io/projected/065d2f14-a873-4ca1-b0aa-6bde51d5aec0-kube-api-access-p6w2z\") pod \"065d2f14-a873-4ca1-b0aa-6bde51d5aec0\" (UID: \"065d2f14-a873-4ca1-b0aa-6bde51d5aec0\") " Feb 17 18:01:55 crc kubenswrapper[4680]: I0217 18:01:55.618877 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065d2f14-a873-4ca1-b0aa-6bde51d5aec0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "065d2f14-a873-4ca1-b0aa-6bde51d5aec0" (UID: "065d2f14-a873-4ca1-b0aa-6bde51d5aec0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:01:55 crc kubenswrapper[4680]: I0217 18:01:55.621678 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46799f27-6bac-4031-9277-7310bf49adbc-kube-api-access-h6z5d" (OuterVolumeSpecName: "kube-api-access-h6z5d") pod "46799f27-6bac-4031-9277-7310bf49adbc" (UID: "46799f27-6bac-4031-9277-7310bf49adbc"). InnerVolumeSpecName "kube-api-access-h6z5d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:01:55 crc kubenswrapper[4680]: I0217 18:01:55.641819 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/065d2f14-a873-4ca1-b0aa-6bde51d5aec0-kube-api-access-p6w2z" (OuterVolumeSpecName: "kube-api-access-p6w2z") pod "065d2f14-a873-4ca1-b0aa-6bde51d5aec0" (UID: "065d2f14-a873-4ca1-b0aa-6bde51d5aec0"). InnerVolumeSpecName "kube-api-access-p6w2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:01:55 crc kubenswrapper[4680]: I0217 18:01:55.717808 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46799f27-6bac-4031-9277-7310bf49adbc-operator-scripts\") pod \"46799f27-6bac-4031-9277-7310bf49adbc\" (UID: \"46799f27-6bac-4031-9277-7310bf49adbc\") " Feb 17 18:01:55 crc kubenswrapper[4680]: I0217 18:01:55.718483 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6z5d\" (UniqueName: \"kubernetes.io/projected/46799f27-6bac-4031-9277-7310bf49adbc-kube-api-access-h6z5d\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:55 crc kubenswrapper[4680]: I0217 18:01:55.718505 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/065d2f14-a873-4ca1-b0aa-6bde51d5aec0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:55 crc kubenswrapper[4680]: I0217 18:01:55.718518 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6w2z\" (UniqueName: \"kubernetes.io/projected/065d2f14-a873-4ca1-b0aa-6bde51d5aec0-kube-api-access-p6w2z\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:55 crc kubenswrapper[4680]: I0217 18:01:55.719028 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46799f27-6bac-4031-9277-7310bf49adbc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "46799f27-6bac-4031-9277-7310bf49adbc" (UID: "46799f27-6bac-4031-9277-7310bf49adbc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:01:55 crc kubenswrapper[4680]: I0217 18:01:55.819817 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46799f27-6bac-4031-9277-7310bf49adbc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:56 crc kubenswrapper[4680]: I0217 18:01:56.033086 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"54c53390-eb8a-4a06-9d50-7d0a4f10d3df","Type":"ContainerStarted","Data":"8787ac7e155790d6858a60cc05e7db13a8ea7a46fef66151b678e0922439a8f9"} Feb 17 18:01:56 crc kubenswrapper[4680]: I0217 18:01:56.033138 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"54c53390-eb8a-4a06-9d50-7d0a4f10d3df","Type":"ContainerStarted","Data":"1f0612cb6e13cdf4456ac13cff13c41b0ea77952dc6e4310da64c74833359201"} Feb 17 18:01:56 crc kubenswrapper[4680]: I0217 18:01:56.033151 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"54c53390-eb8a-4a06-9d50-7d0a4f10d3df","Type":"ContainerStarted","Data":"d5b12297c4db2c697a8c54171bf2dda494bb2b7fd7d7fdb7fa8b42713617a1bd"} Feb 17 18:01:56 crc kubenswrapper[4680]: I0217 18:01:56.033162 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"54c53390-eb8a-4a06-9d50-7d0a4f10d3df","Type":"ContainerStarted","Data":"08e5731b52539da2fcfc6554b2288426121ae1df7317eedc0cecd6d1a353126a"} Feb 17 18:01:56 crc kubenswrapper[4680]: I0217 18:01:56.035377 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ncbx6" event={"ID":"46799f27-6bac-4031-9277-7310bf49adbc","Type":"ContainerDied","Data":"c33567530702fb7fdb523c2ee53d21846a5e703d0ca5be213dc0bff35242531c"} Feb 17 18:01:56 crc kubenswrapper[4680]: I0217 18:01:56.035412 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c33567530702fb7fdb523c2ee53d21846a5e703d0ca5be213dc0bff35242531c" Feb 17 18:01:56 crc kubenswrapper[4680]: I0217 18:01:56.035504 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ncbx6" Feb 17 18:01:56 crc kubenswrapper[4680]: I0217 18:01:56.048490 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ck6vb" Feb 17 18:01:56 crc kubenswrapper[4680]: I0217 18:01:56.049617 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ck6vb" event={"ID":"065d2f14-a873-4ca1-b0aa-6bde51d5aec0","Type":"ContainerDied","Data":"0fbd2519f33b7993e8deec324cbf43815b468dd198dc7482be9fdccfd5585b74"} Feb 17 18:01:56 crc kubenswrapper[4680]: I0217 18:01:56.049659 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fbd2519f33b7993e8deec324cbf43815b468dd198dc7482be9fdccfd5585b74" Feb 17 18:01:56 crc kubenswrapper[4680]: I0217 18:01:56.622571 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-72aa-account-create-update-ssg2c" Feb 17 18:01:56 crc kubenswrapper[4680]: I0217 18:01:56.637206 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd6qf\" (UniqueName: \"kubernetes.io/projected/ff9aefee-2f32-417a-a2e1-4cb309fc6c19-kube-api-access-fd6qf\") pod \"ff9aefee-2f32-417a-a2e1-4cb309fc6c19\" (UID: \"ff9aefee-2f32-417a-a2e1-4cb309fc6c19\") " Feb 17 18:01:56 crc kubenswrapper[4680]: I0217 18:01:56.643099 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff9aefee-2f32-417a-a2e1-4cb309fc6c19-operator-scripts\") pod \"ff9aefee-2f32-417a-a2e1-4cb309fc6c19\" (UID: \"ff9aefee-2f32-417a-a2e1-4cb309fc6c19\") " Feb 17 18:01:56 crc kubenswrapper[4680]: I0217 18:01:56.649080 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff9aefee-2f32-417a-a2e1-4cb309fc6c19-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff9aefee-2f32-417a-a2e1-4cb309fc6c19" (UID: "ff9aefee-2f32-417a-a2e1-4cb309fc6c19"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:01:56 crc kubenswrapper[4680]: I0217 18:01:56.651854 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff9aefee-2f32-417a-a2e1-4cb309fc6c19-kube-api-access-fd6qf" (OuterVolumeSpecName: "kube-api-access-fd6qf") pod "ff9aefee-2f32-417a-a2e1-4cb309fc6c19" (UID: "ff9aefee-2f32-417a-a2e1-4cb309fc6c19"). InnerVolumeSpecName "kube-api-access-fd6qf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:01:56 crc kubenswrapper[4680]: I0217 18:01:56.750268 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd6qf\" (UniqueName: \"kubernetes.io/projected/ff9aefee-2f32-417a-a2e1-4cb309fc6c19-kube-api-access-fd6qf\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:56 crc kubenswrapper[4680]: I0217 18:01:56.750356 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff9aefee-2f32-417a-a2e1-4cb309fc6c19-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:56 crc kubenswrapper[4680]: I0217 18:01:56.780307 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-379c-account-create-update-phc6b" Feb 17 18:01:56 crc kubenswrapper[4680]: I0217 18:01:56.877049 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlqp2\" (UniqueName: \"kubernetes.io/projected/904c1e1e-004a-4ec1-8981-2bcbbbbea7e0-kube-api-access-vlqp2\") pod \"904c1e1e-004a-4ec1-8981-2bcbbbbea7e0\" (UID: \"904c1e1e-004a-4ec1-8981-2bcbbbbea7e0\") " Feb 17 18:01:56 crc kubenswrapper[4680]: I0217 18:01:56.877115 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/904c1e1e-004a-4ec1-8981-2bcbbbbea7e0-operator-scripts\") pod \"904c1e1e-004a-4ec1-8981-2bcbbbbea7e0\" (UID: \"904c1e1e-004a-4ec1-8981-2bcbbbbea7e0\") " Feb 17 18:01:56 crc kubenswrapper[4680]: I0217 18:01:56.880660 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/904c1e1e-004a-4ec1-8981-2bcbbbbea7e0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "904c1e1e-004a-4ec1-8981-2bcbbbbea7e0" (UID: "904c1e1e-004a-4ec1-8981-2bcbbbbea7e0"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:01:56 crc kubenswrapper[4680]: I0217 18:01:56.882122 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/904c1e1e-004a-4ec1-8981-2bcbbbbea7e0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:56 crc kubenswrapper[4680]: I0217 18:01:56.886646 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/904c1e1e-004a-4ec1-8981-2bcbbbbea7e0-kube-api-access-vlqp2" (OuterVolumeSpecName: "kube-api-access-vlqp2") pod "904c1e1e-004a-4ec1-8981-2bcbbbbea7e0" (UID: "904c1e1e-004a-4ec1-8981-2bcbbbbea7e0"). InnerVolumeSpecName "kube-api-access-vlqp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:01:56 crc kubenswrapper[4680]: I0217 18:01:56.986533 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlqp2\" (UniqueName: \"kubernetes.io/projected/904c1e1e-004a-4ec1-8981-2bcbbbbea7e0-kube-api-access-vlqp2\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:57 crc kubenswrapper[4680]: I0217 18:01:57.061355 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-988e-account-create-update-5qtck" event={"ID":"d0686b19-fda5-4f55-88b4-f831de276058","Type":"ContainerDied","Data":"3f43492aa63a87440ca28e040c1e2f7acf748abefa9467142e7c573070b4ba4b"} Feb 17 18:01:57 crc kubenswrapper[4680]: I0217 18:01:57.061405 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f43492aa63a87440ca28e040c1e2f7acf748abefa9467142e7c573070b4ba4b" Feb 17 18:01:57 crc kubenswrapper[4680]: I0217 18:01:57.063202 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-72aa-account-create-update-ssg2c" event={"ID":"ff9aefee-2f32-417a-a2e1-4cb309fc6c19","Type":"ContainerDied","Data":"e35c983edcb810092fc3499024cf72ded6281a68148d652e2350b42f074cd35f"} Feb 17 18:01:57 crc kubenswrapper[4680]: I0217 18:01:57.063228 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e35c983edcb810092fc3499024cf72ded6281a68148d652e2350b42f074cd35f" Feb 17 18:01:57 crc kubenswrapper[4680]: I0217 18:01:57.063270 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-72aa-account-create-update-ssg2c" Feb 17 18:01:57 crc kubenswrapper[4680]: I0217 18:01:57.077553 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lvrxv" event={"ID":"8e03ce69-e84a-4e27-bdc6-48625c11da9f","Type":"ContainerDied","Data":"c13c2e0025d4276cf662cb6be4ed9e63cfe47cd250379b4c5e0782fa80585f4f"} Feb 17 18:01:57 crc kubenswrapper[4680]: I0217 18:01:57.077595 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c13c2e0025d4276cf662cb6be4ed9e63cfe47cd250379b4c5e0782fa80585f4f" Feb 17 18:01:57 crc kubenswrapper[4680]: I0217 18:01:57.080939 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-379c-account-create-update-phc6b" event={"ID":"904c1e1e-004a-4ec1-8981-2bcbbbbea7e0","Type":"ContainerDied","Data":"a63104fd8fecf40dcfff89c7d111d2fa8bcaf33b3480f7911c9e4c7b6e9b4fac"} Feb 17 18:01:57 crc kubenswrapper[4680]: I0217 18:01:57.080977 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a63104fd8fecf40dcfff89c7d111d2fa8bcaf33b3480f7911c9e4c7b6e9b4fac" Feb 17 18:01:57 crc kubenswrapper[4680]: I0217 18:01:57.081033 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-379c-account-create-update-phc6b" Feb 17 18:01:57 crc kubenswrapper[4680]: I0217 18:01:57.092681 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-988e-account-create-update-5qtck" Feb 17 18:01:57 crc kubenswrapper[4680]: I0217 18:01:57.095116 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lvrxv" Feb 17 18:01:57 crc kubenswrapper[4680]: I0217 18:01:57.190842 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0686b19-fda5-4f55-88b4-f831de276058-operator-scripts\") pod \"d0686b19-fda5-4f55-88b4-f831de276058\" (UID: \"d0686b19-fda5-4f55-88b4-f831de276058\") " Feb 17 18:01:57 crc kubenswrapper[4680]: I0217 18:01:57.190900 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e03ce69-e84a-4e27-bdc6-48625c11da9f-operator-scripts\") pod \"8e03ce69-e84a-4e27-bdc6-48625c11da9f\" (UID: \"8e03ce69-e84a-4e27-bdc6-48625c11da9f\") " Feb 17 18:01:57 crc kubenswrapper[4680]: I0217 18:01:57.191040 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgwhd\" (UniqueName: \"kubernetes.io/projected/d0686b19-fda5-4f55-88b4-f831de276058-kube-api-access-wgwhd\") pod \"d0686b19-fda5-4f55-88b4-f831de276058\" (UID: \"d0686b19-fda5-4f55-88b4-f831de276058\") " Feb 17 18:01:57 crc kubenswrapper[4680]: I0217 18:01:57.191059 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkznh\" (UniqueName: \"kubernetes.io/projected/8e03ce69-e84a-4e27-bdc6-48625c11da9f-kube-api-access-jkznh\") pod \"8e03ce69-e84a-4e27-bdc6-48625c11da9f\" (UID: \"8e03ce69-e84a-4e27-bdc6-48625c11da9f\") " Feb 17 18:01:57 crc kubenswrapper[4680]: I0217 18:01:57.193280 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0686b19-fda5-4f55-88b4-f831de276058-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d0686b19-fda5-4f55-88b4-f831de276058" (UID: "d0686b19-fda5-4f55-88b4-f831de276058"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:01:57 crc kubenswrapper[4680]: I0217 18:01:57.194238 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e03ce69-e84a-4e27-bdc6-48625c11da9f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e03ce69-e84a-4e27-bdc6-48625c11da9f" (UID: "8e03ce69-e84a-4e27-bdc6-48625c11da9f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:01:57 crc kubenswrapper[4680]: I0217 18:01:57.196741 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e03ce69-e84a-4e27-bdc6-48625c11da9f-kube-api-access-jkznh" (OuterVolumeSpecName: "kube-api-access-jkznh") pod "8e03ce69-e84a-4e27-bdc6-48625c11da9f" (UID: "8e03ce69-e84a-4e27-bdc6-48625c11da9f"). InnerVolumeSpecName "kube-api-access-jkznh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:01:57 crc kubenswrapper[4680]: I0217 18:01:57.211871 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0686b19-fda5-4f55-88b4-f831de276058-kube-api-access-wgwhd" (OuterVolumeSpecName: "kube-api-access-wgwhd") pod "d0686b19-fda5-4f55-88b4-f831de276058" (UID: "d0686b19-fda5-4f55-88b4-f831de276058"). InnerVolumeSpecName "kube-api-access-wgwhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:01:57 crc kubenswrapper[4680]: I0217 18:01:57.293367 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgwhd\" (UniqueName: \"kubernetes.io/projected/d0686b19-fda5-4f55-88b4-f831de276058-kube-api-access-wgwhd\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:57 crc kubenswrapper[4680]: I0217 18:01:57.293409 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkznh\" (UniqueName: \"kubernetes.io/projected/8e03ce69-e84a-4e27-bdc6-48625c11da9f-kube-api-access-jkznh\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:57 crc kubenswrapper[4680]: I0217 18:01:57.293422 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0686b19-fda5-4f55-88b4-f831de276058-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:57 crc kubenswrapper[4680]: I0217 18:01:57.293430 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e03ce69-e84a-4e27-bdc6-48625c11da9f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:01:58 crc kubenswrapper[4680]: I0217 18:01:58.088278 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lvrxv" Feb 17 18:01:58 crc kubenswrapper[4680]: I0217 18:01:58.088284 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-988e-account-create-update-5qtck" Feb 17 18:02:01 crc kubenswrapper[4680]: I0217 18:02:01.138454 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-96c9q" event={"ID":"ce8c3e62-2beb-456a-b77b-386213f4bf98","Type":"ContainerStarted","Data":"bf3202a551f7329f30ec3ef58caf4dfe7277da5c16d3eb54a3debf569b8b910f"} Feb 17 18:02:01 crc kubenswrapper[4680]: I0217 18:02:01.163562 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"54c53390-eb8a-4a06-9d50-7d0a4f10d3df","Type":"ContainerStarted","Data":"ddaae0a4e2fe59cf3fe39e2ea2116d137f51778d62168208f13f86d555873bd1"} Feb 17 18:02:01 crc kubenswrapper[4680]: I0217 18:02:01.163608 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"54c53390-eb8a-4a06-9d50-7d0a4f10d3df","Type":"ContainerStarted","Data":"1d6bc26b1bc4b0bc8e5b22e8e3269c3596dbeea3cc149603e4b0984f100ed46a"} Feb 17 18:02:01 crc kubenswrapper[4680]: I0217 18:02:01.163617 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"54c53390-eb8a-4a06-9d50-7d0a4f10d3df","Type":"ContainerStarted","Data":"c21259d9f20f69e0f15f99428f2ad4bd03af6dcec5814ddc361b4699fd6aff2f"} Feb 17 18:02:01 crc kubenswrapper[4680]: I0217 18:02:01.163625 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"54c53390-eb8a-4a06-9d50-7d0a4f10d3df","Type":"ContainerStarted","Data":"a6cd76423ce44e46447623bd3a1db787aa7c6b501efe08a1094790f167e8957e"} Feb 17 18:02:01 crc kubenswrapper[4680]: I0217 18:02:01.175541 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-96c9q" podStartSLOduration=2.744580528 podStartE2EDuration="10.175443413s" podCreationTimestamp="2026-02-17 18:01:51 +0000 UTC" firstStartedPulling="2026-02-17 18:01:53.008512601 +0000 UTC m=+1042.559411280" lastFinishedPulling="2026-02-17 18:02:00.439375486 +0000 UTC m=+1049.990274165" observedRunningTime="2026-02-17 18:02:01.157646324 +0000 UTC m=+1050.708545003" watchObservedRunningTime="2026-02-17 18:02:01.175443413 +0000 UTC m=+1050.726342092" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.175627 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"54c53390-eb8a-4a06-9d50-7d0a4f10d3df","Type":"ContainerStarted","Data":"62d781baa9a691c18cf692f07e2a2a8fa9714ef6d4c42e3922870c36730cd64f"} Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.177047 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"54c53390-eb8a-4a06-9d50-7d0a4f10d3df","Type":"ContainerStarted","Data":"1c46ccaa534526682c4ecef672acc1e13920833f8a84c47961af0893fcdbea17"} Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.177152 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"54c53390-eb8a-4a06-9d50-7d0a4f10d3df","Type":"ContainerStarted","Data":"653f96964bbe3123d1f1828203e85bb5dcf8ad1c5d2a0a7c6fee5fac31f9ba00"} Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.219150 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.948598432 podStartE2EDuration="47.219127233s" podCreationTimestamp="2026-02-17 18:01:15 +0000 UTC" firstStartedPulling="2026-02-17 18:01:50.160132061 +0000 UTC m=+1039.711030740" lastFinishedPulling="2026-02-17 18:02:00.430660862 +0000 
UTC m=+1049.981559541" observedRunningTime="2026-02-17 18:02:02.213173576 +0000 UTC m=+1051.764072265" watchObservedRunningTime="2026-02-17 18:02:02.219127233 +0000 UTC m=+1051.770025912" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.597432 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-fcrz7"] Feb 17 18:02:02 crc kubenswrapper[4680]: E0217 18:02:02.598188 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff9aefee-2f32-417a-a2e1-4cb309fc6c19" containerName="mariadb-account-create-update" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.598290 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff9aefee-2f32-417a-a2e1-4cb309fc6c19" containerName="mariadb-account-create-update" Feb 17 18:02:02 crc kubenswrapper[4680]: E0217 18:02:02.598481 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="904c1e1e-004a-4ec1-8981-2bcbbbbea7e0" containerName="mariadb-account-create-update" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.598562 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="904c1e1e-004a-4ec1-8981-2bcbbbbea7e0" containerName="mariadb-account-create-update" Feb 17 18:02:02 crc kubenswrapper[4680]: E0217 18:02:02.598651 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e03ce69-e84a-4e27-bdc6-48625c11da9f" containerName="mariadb-database-create" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.598733 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e03ce69-e84a-4e27-bdc6-48625c11da9f" containerName="mariadb-database-create" Feb 17 18:02:02 crc kubenswrapper[4680]: E0217 18:02:02.598810 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46799f27-6bac-4031-9277-7310bf49adbc" containerName="mariadb-database-create" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.598882 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="46799f27-6bac-4031-9277-7310bf49adbc" containerName="mariadb-database-create" Feb 17 18:02:02 crc kubenswrapper[4680]: E0217 18:02:02.598968 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb237184-93a1-48b5-8ae7-bfccb72ddb6c" containerName="mariadb-account-create-update" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.599052 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb237184-93a1-48b5-8ae7-bfccb72ddb6c" containerName="mariadb-account-create-update" Feb 17 18:02:02 crc kubenswrapper[4680]: E0217 18:02:02.599139 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0686b19-fda5-4f55-88b4-f831de276058" containerName="mariadb-account-create-update" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.599216 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0686b19-fda5-4f55-88b4-f831de276058" containerName="mariadb-account-create-update" Feb 17 18:02:02 crc kubenswrapper[4680]: E0217 18:02:02.599304 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="065d2f14-a873-4ca1-b0aa-6bde51d5aec0" containerName="mariadb-database-create" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.599447 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="065d2f14-a873-4ca1-b0aa-6bde51d5aec0" containerName="mariadb-database-create" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.599716 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb237184-93a1-48b5-8ae7-bfccb72ddb6c" containerName="mariadb-account-create-update" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 
18:02:02.599814 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0686b19-fda5-4f55-88b4-f831de276058" containerName="mariadb-account-create-update" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.599901 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="46799f27-6bac-4031-9277-7310bf49adbc" containerName="mariadb-database-create" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.599978 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="065d2f14-a873-4ca1-b0aa-6bde51d5aec0" containerName="mariadb-database-create" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.600066 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="904c1e1e-004a-4ec1-8981-2bcbbbbea7e0" containerName="mariadb-account-create-update" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.600141 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff9aefee-2f32-417a-a2e1-4cb309fc6c19" containerName="mariadb-account-create-update" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.600218 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e03ce69-e84a-4e27-bdc6-48625c11da9f" containerName="mariadb-database-create" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.601360 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-fcrz7" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.603244 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.623360 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-fcrz7"] Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.649002 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/023f6cd4-af35-4af3-8588-fdd4a49dc658-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-fcrz7\" (UID: \"023f6cd4-af35-4af3-8588-fdd4a49dc658\") " pod="openstack/dnsmasq-dns-764c5664d7-fcrz7" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.649113 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/023f6cd4-af35-4af3-8588-fdd4a49dc658-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-fcrz7\" (UID: \"023f6cd4-af35-4af3-8588-fdd4a49dc658\") " pod="openstack/dnsmasq-dns-764c5664d7-fcrz7" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.649282 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/023f6cd4-af35-4af3-8588-fdd4a49dc658-dns-svc\") pod \"dnsmasq-dns-764c5664d7-fcrz7\" (UID: \"023f6cd4-af35-4af3-8588-fdd4a49dc658\") " pod="openstack/dnsmasq-dns-764c5664d7-fcrz7" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.649353 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/023f6cd4-af35-4af3-8588-fdd4a49dc658-config\") pod \"dnsmasq-dns-764c5664d7-fcrz7\" (UID: \"023f6cd4-af35-4af3-8588-fdd4a49dc658\") " pod="openstack/dnsmasq-dns-764c5664d7-fcrz7" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.649388 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s2fn\" 
(UniqueName: \"kubernetes.io/projected/023f6cd4-af35-4af3-8588-fdd4a49dc658-kube-api-access-7s2fn\") pod \"dnsmasq-dns-764c5664d7-fcrz7\" (UID: \"023f6cd4-af35-4af3-8588-fdd4a49dc658\") " pod="openstack/dnsmasq-dns-764c5664d7-fcrz7" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.649418 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/023f6cd4-af35-4af3-8588-fdd4a49dc658-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-fcrz7\" (UID: \"023f6cd4-af35-4af3-8588-fdd4a49dc658\") " pod="openstack/dnsmasq-dns-764c5664d7-fcrz7" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.750793 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/023f6cd4-af35-4af3-8588-fdd4a49dc658-dns-svc\") pod \"dnsmasq-dns-764c5664d7-fcrz7\" (UID: \"023f6cd4-af35-4af3-8588-fdd4a49dc658\") " pod="openstack/dnsmasq-dns-764c5664d7-fcrz7" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.750887 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/023f6cd4-af35-4af3-8588-fdd4a49dc658-config\") pod \"dnsmasq-dns-764c5664d7-fcrz7\" (UID: \"023f6cd4-af35-4af3-8588-fdd4a49dc658\") " pod="openstack/dnsmasq-dns-764c5664d7-fcrz7" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.750923 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s2fn\" (UniqueName: \"kubernetes.io/projected/023f6cd4-af35-4af3-8588-fdd4a49dc658-kube-api-access-7s2fn\") pod \"dnsmasq-dns-764c5664d7-fcrz7\" (UID: \"023f6cd4-af35-4af3-8588-fdd4a49dc658\") " pod="openstack/dnsmasq-dns-764c5664d7-fcrz7" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.750968 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/023f6cd4-af35-4af3-8588-fdd4a49dc658-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-fcrz7\" (UID: \"023f6cd4-af35-4af3-8588-fdd4a49dc658\") " pod="openstack/dnsmasq-dns-764c5664d7-fcrz7" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.751029 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/023f6cd4-af35-4af3-8588-fdd4a49dc658-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-fcrz7\" (UID: \"023f6cd4-af35-4af3-8588-fdd4a49dc658\") " pod="openstack/dnsmasq-dns-764c5664d7-fcrz7" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.751084 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/023f6cd4-af35-4af3-8588-fdd4a49dc658-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-fcrz7\" (UID: \"023f6cd4-af35-4af3-8588-fdd4a49dc658\") " pod="openstack/dnsmasq-dns-764c5664d7-fcrz7" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.752247 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/023f6cd4-af35-4af3-8588-fdd4a49dc658-config\") pod \"dnsmasq-dns-764c5664d7-fcrz7\" (UID: \"023f6cd4-af35-4af3-8588-fdd4a49dc658\") " pod="openstack/dnsmasq-dns-764c5664d7-fcrz7" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.752253 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/023f6cd4-af35-4af3-8588-fdd4a49dc658-dns-svc\") pod \"dnsmasq-dns-764c5664d7-fcrz7\" (UID: \"023f6cd4-af35-4af3-8588-fdd4a49dc658\") " pod="openstack/dnsmasq-dns-764c5664d7-fcrz7" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.752372 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/023f6cd4-af35-4af3-8588-fdd4a49dc658-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-fcrz7\" (UID: \"023f6cd4-af35-4af3-8588-fdd4a49dc658\") " pod="openstack/dnsmasq-dns-764c5664d7-fcrz7" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.752507 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/023f6cd4-af35-4af3-8588-fdd4a49dc658-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-fcrz7\" (UID: \"023f6cd4-af35-4af3-8588-fdd4a49dc658\") " pod="openstack/dnsmasq-dns-764c5664d7-fcrz7" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.752527 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/023f6cd4-af35-4af3-8588-fdd4a49dc658-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-fcrz7\" (UID: \"023f6cd4-af35-4af3-8588-fdd4a49dc658\") " pod="openstack/dnsmasq-dns-764c5664d7-fcrz7" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.773966 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s2fn\" (UniqueName: \"kubernetes.io/projected/023f6cd4-af35-4af3-8588-fdd4a49dc658-kube-api-access-7s2fn\") pod \"dnsmasq-dns-764c5664d7-fcrz7\" (UID: \"023f6cd4-af35-4af3-8588-fdd4a49dc658\") " pod="openstack/dnsmasq-dns-764c5664d7-fcrz7" Feb 17 18:02:02 crc kubenswrapper[4680]: I0217 18:02:02.927588 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-fcrz7" Feb 17 18:02:03 crc kubenswrapper[4680]: I0217 18:02:03.186671 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-fcrz7"] Feb 17 18:02:03 crc kubenswrapper[4680]: W0217 18:02:03.190777 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod023f6cd4_af35_4af3_8588_fdd4a49dc658.slice/crio-182aecb194adfd7a1680b1264e870c9da5c6da33d10225d9ea380e21078ff183 WatchSource:0}: Error finding container 182aecb194adfd7a1680b1264e870c9da5c6da33d10225d9ea380e21078ff183: Status 404 returned error can't find the container with id 182aecb194adfd7a1680b1264e870c9da5c6da33d10225d9ea380e21078ff183 Feb 17 18:02:03 crc kubenswrapper[4680]: I0217 18:02:03.193520 4680 generic.go:334] "Generic (PLEG): container finished" podID="4c1a34b3-87e0-4584-8a46-18320fe88a7e" containerID="6fb39025c05e3edcf45598fd9c8259e9e07bfa52f700036a907cc2c3b7c577cc" exitCode=0 Feb 17 18:02:03 crc kubenswrapper[4680]: I0217 18:02:03.194404 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dnfft" event={"ID":"4c1a34b3-87e0-4584-8a46-18320fe88a7e","Type":"ContainerDied","Data":"6fb39025c05e3edcf45598fd9c8259e9e07bfa52f700036a907cc2c3b7c577cc"} Feb 17 18:02:04 crc kubenswrapper[4680]: I0217 18:02:04.203393 4680 generic.go:334] "Generic (PLEG): container finished" podID="023f6cd4-af35-4af3-8588-fdd4a49dc658" containerID="1e2d7da058b2a96e355c455398552f86c3edbb2e67dca4d5e7030eb7354a8dd0" exitCode=0 Feb 17 18:02:04 crc kubenswrapper[4680]: I0217 18:02:04.203473 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-fcrz7" event={"ID":"023f6cd4-af35-4af3-8588-fdd4a49dc658","Type":"ContainerDied","Data":"1e2d7da058b2a96e355c455398552f86c3edbb2e67dca4d5e7030eb7354a8dd0"} Feb 17 18:02:04 crc kubenswrapper[4680]: I0217 18:02:04.203983 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-fcrz7" event={"ID":"023f6cd4-af35-4af3-8588-fdd4a49dc658","Type":"ContainerStarted","Data":"182aecb194adfd7a1680b1264e870c9da5c6da33d10225d9ea380e21078ff183"} Feb 17 18:02:04 crc kubenswrapper[4680]: I0217 18:02:04.602415 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-dnfft" Feb 17 18:02:04 crc kubenswrapper[4680]: I0217 18:02:04.783550 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c1a34b3-87e0-4584-8a46-18320fe88a7e-combined-ca-bundle\") pod \"4c1a34b3-87e0-4584-8a46-18320fe88a7e\" (UID: \"4c1a34b3-87e0-4584-8a46-18320fe88a7e\") " Feb 17 18:02:04 crc kubenswrapper[4680]: I0217 18:02:04.783603 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c1a34b3-87e0-4584-8a46-18320fe88a7e-db-sync-config-data\") pod \"4c1a34b3-87e0-4584-8a46-18320fe88a7e\" (UID: \"4c1a34b3-87e0-4584-8a46-18320fe88a7e\") " Feb 17 18:02:04 crc kubenswrapper[4680]: I0217 18:02:04.783689 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n46ll\" (UniqueName: \"kubernetes.io/projected/4c1a34b3-87e0-4584-8a46-18320fe88a7e-kube-api-access-n46ll\") pod \"4c1a34b3-87e0-4584-8a46-18320fe88a7e\" (UID: \"4c1a34b3-87e0-4584-8a46-18320fe88a7e\") " Feb 17 18:02:04 crc kubenswrapper[4680]: I0217 18:02:04.783812 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c1a34b3-87e0-4584-8a46-18320fe88a7e-config-data\") pod \"4c1a34b3-87e0-4584-8a46-18320fe88a7e\" (UID: \"4c1a34b3-87e0-4584-8a46-18320fe88a7e\") " Feb 17 18:02:04 crc kubenswrapper[4680]: I0217 18:02:04.790622 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c1a34b3-87e0-4584-8a46-18320fe88a7e-kube-api-access-n46ll" (OuterVolumeSpecName: "kube-api-access-n46ll") pod "4c1a34b3-87e0-4584-8a46-18320fe88a7e" (UID: "4c1a34b3-87e0-4584-8a46-18320fe88a7e"). InnerVolumeSpecName "kube-api-access-n46ll". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:02:04 crc kubenswrapper[4680]: I0217 18:02:04.791716 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c1a34b3-87e0-4584-8a46-18320fe88a7e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4c1a34b3-87e0-4584-8a46-18320fe88a7e" (UID: "4c1a34b3-87e0-4584-8a46-18320fe88a7e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:02:04 crc kubenswrapper[4680]: I0217 18:02:04.818663 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c1a34b3-87e0-4584-8a46-18320fe88a7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c1a34b3-87e0-4584-8a46-18320fe88a7e" (UID: "4c1a34b3-87e0-4584-8a46-18320fe88a7e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:02:04 crc kubenswrapper[4680]: I0217 18:02:04.849820 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c1a34b3-87e0-4584-8a46-18320fe88a7e-config-data" (OuterVolumeSpecName: "config-data") pod "4c1a34b3-87e0-4584-8a46-18320fe88a7e" (UID: "4c1a34b3-87e0-4584-8a46-18320fe88a7e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:02:04 crc kubenswrapper[4680]: I0217 18:02:04.885785 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n46ll\" (UniqueName: \"kubernetes.io/projected/4c1a34b3-87e0-4584-8a46-18320fe88a7e-kube-api-access-n46ll\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:04 crc kubenswrapper[4680]: I0217 18:02:04.885833 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c1a34b3-87e0-4584-8a46-18320fe88a7e-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:04 crc kubenswrapper[4680]: I0217 18:02:04.885846 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c1a34b3-87e0-4584-8a46-18320fe88a7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:04 crc kubenswrapper[4680]: I0217 18:02:04.885857 4680 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c1a34b3-87e0-4584-8a46-18320fe88a7e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.215308 4680 generic.go:334] "Generic (PLEG): container finished" podID="ce8c3e62-2beb-456a-b77b-386213f4bf98" containerID="bf3202a551f7329f30ec3ef58caf4dfe7277da5c16d3eb54a3debf569b8b910f" exitCode=0 Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.215400 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-96c9q" event={"ID":"ce8c3e62-2beb-456a-b77b-386213f4bf98","Type":"ContainerDied","Data":"bf3202a551f7329f30ec3ef58caf4dfe7277da5c16d3eb54a3debf569b8b910f"} Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.225948 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dnfft" event={"ID":"4c1a34b3-87e0-4584-8a46-18320fe88a7e","Type":"ContainerDied","Data":"fe3bf54f77bd051766bc67bafe3c605b50e29e1f5400474207c028d85d06b12a"} Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.225986 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe3bf54f77bd051766bc67bafe3c605b50e29e1f5400474207c028d85d06b12a" Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.226034 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-dnfft" Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.239902 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-fcrz7" event={"ID":"023f6cd4-af35-4af3-8588-fdd4a49dc658","Type":"ContainerStarted","Data":"b50bdb7fd8ee4e85f0378f01560f28ad2818cbf201c362bb1321b65e7c785de7"} Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.240976 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-fcrz7" Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.267136 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-fcrz7" podStartSLOduration=3.267113375 podStartE2EDuration="3.267113375s" podCreationTimestamp="2026-02-17 18:02:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:02:05.260403805 +0000 UTC m=+1054.811302504" watchObservedRunningTime="2026-02-17 18:02:05.267113375 +0000 UTC m=+1054.818012054" Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.589282 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.589353 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.611615 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-fcrz7"] Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.645095 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-687dh"] Feb 17 18:02:05 crc kubenswrapper[4680]: E0217 18:02:05.645456 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1a34b3-87e0-4584-8a46-18320fe88a7e" containerName="glance-db-sync" Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.645475 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1a34b3-87e0-4584-8a46-18320fe88a7e" containerName="glance-db-sync" Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.645635 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1a34b3-87e0-4584-8a46-18320fe88a7e" containerName="glance-db-sync" Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.646394 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-687dh" Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.696236 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-687dh"] Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.800945 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45501435-01c6-448e-84ec-3fee833f0274-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-687dh\" (UID: \"45501435-01c6-448e-84ec-3fee833f0274\") " pod="openstack/dnsmasq-dns-74f6bcbc87-687dh" Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.801852 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45501435-01c6-448e-84ec-3fee833f0274-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-687dh\" (UID: \"45501435-01c6-448e-84ec-3fee833f0274\") " pod="openstack/dnsmasq-dns-74f6bcbc87-687dh" Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.802009 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45501435-01c6-448e-84ec-3fee833f0274-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-687dh\" (UID: \"45501435-01c6-448e-84ec-3fee833f0274\") " pod="openstack/dnsmasq-dns-74f6bcbc87-687dh" Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.802133 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m2mz\" (UniqueName: \"kubernetes.io/projected/45501435-01c6-448e-84ec-3fee833f0274-kube-api-access-6m2mz\") pod \"dnsmasq-dns-74f6bcbc87-687dh\" (UID: \"45501435-01c6-448e-84ec-3fee833f0274\") " pod="openstack/dnsmasq-dns-74f6bcbc87-687dh" Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.802272 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45501435-01c6-448e-84ec-3fee833f0274-config\") pod \"dnsmasq-dns-74f6bcbc87-687dh\" (UID: \"45501435-01c6-448e-84ec-3fee833f0274\") " pod="openstack/dnsmasq-dns-74f6bcbc87-687dh" Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.802442 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45501435-01c6-448e-84ec-3fee833f0274-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-687dh\" (UID: \"45501435-01c6-448e-84ec-3fee833f0274\") " pod="openstack/dnsmasq-dns-74f6bcbc87-687dh" Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.904137 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45501435-01c6-448e-84ec-3fee833f0274-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-687dh\" (UID: \"45501435-01c6-448e-84ec-3fee833f0274\") " pod="openstack/dnsmasq-dns-74f6bcbc87-687dh" Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.904243 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45501435-01c6-448e-84ec-3fee833f0274-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-687dh\" (UID: \"45501435-01c6-448e-84ec-3fee833f0274\") " pod="openstack/dnsmasq-dns-74f6bcbc87-687dh" Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.904308 4680 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45501435-01c6-448e-84ec-3fee833f0274-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-687dh\" (UID: \"45501435-01c6-448e-84ec-3fee833f0274\") " pod="openstack/dnsmasq-dns-74f6bcbc87-687dh" Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.904364 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m2mz\" (UniqueName: \"kubernetes.io/projected/45501435-01c6-448e-84ec-3fee833f0274-kube-api-access-6m2mz\") pod \"dnsmasq-dns-74f6bcbc87-687dh\" (UID: \"45501435-01c6-448e-84ec-3fee833f0274\") " pod="openstack/dnsmasq-dns-74f6bcbc87-687dh" Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.904418 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45501435-01c6-448e-84ec-3fee833f0274-config\") pod \"dnsmasq-dns-74f6bcbc87-687dh\" (UID: \"45501435-01c6-448e-84ec-3fee833f0274\") " pod="openstack/dnsmasq-dns-74f6bcbc87-687dh" Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.904441 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45501435-01c6-448e-84ec-3fee833f0274-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-687dh\" (UID: \"45501435-01c6-448e-84ec-3fee833f0274\") " pod="openstack/dnsmasq-dns-74f6bcbc87-687dh" Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.905195 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45501435-01c6-448e-84ec-3fee833f0274-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-687dh\" (UID: \"45501435-01c6-448e-84ec-3fee833f0274\") " pod="openstack/dnsmasq-dns-74f6bcbc87-687dh" Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.905257 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45501435-01c6-448e-84ec-3fee833f0274-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-687dh\" (UID: \"45501435-01c6-448e-84ec-3fee833f0274\") " pod="openstack/dnsmasq-dns-74f6bcbc87-687dh" Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.905871 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45501435-01c6-448e-84ec-3fee833f0274-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-687dh\" (UID: \"45501435-01c6-448e-84ec-3fee833f0274\") " pod="openstack/dnsmasq-dns-74f6bcbc87-687dh" Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.906211 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45501435-01c6-448e-84ec-3fee833f0274-config\") pod \"dnsmasq-dns-74f6bcbc87-687dh\" (UID: \"45501435-01c6-448e-84ec-3fee833f0274\") " pod="openstack/dnsmasq-dns-74f6bcbc87-687dh" Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.906587 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45501435-01c6-448e-84ec-3fee833f0274-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-687dh\" (UID: \"45501435-01c6-448e-84ec-3fee833f0274\") " pod="openstack/dnsmasq-dns-74f6bcbc87-687dh" Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.932293 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m2mz\" (UniqueName: 
\"kubernetes.io/projected/45501435-01c6-448e-84ec-3fee833f0274-kube-api-access-6m2mz\") pod \"dnsmasq-dns-74f6bcbc87-687dh\" (UID: \"45501435-01c6-448e-84ec-3fee833f0274\") " pod="openstack/dnsmasq-dns-74f6bcbc87-687dh" Feb 17 18:02:05 crc kubenswrapper[4680]: I0217 18:02:05.960851 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-687dh" Feb 17 18:02:06 crc kubenswrapper[4680]: I0217 18:02:06.435106 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-687dh"] Feb 17 18:02:06 crc kubenswrapper[4680]: I0217 18:02:06.562260 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-96c9q" Feb 17 18:02:06 crc kubenswrapper[4680]: I0217 18:02:06.716223 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw2jj\" (UniqueName: \"kubernetes.io/projected/ce8c3e62-2beb-456a-b77b-386213f4bf98-kube-api-access-qw2jj\") pod \"ce8c3e62-2beb-456a-b77b-386213f4bf98\" (UID: \"ce8c3e62-2beb-456a-b77b-386213f4bf98\") " Feb 17 18:02:06 crc kubenswrapper[4680]: I0217 18:02:06.716577 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce8c3e62-2beb-456a-b77b-386213f4bf98-config-data\") pod \"ce8c3e62-2beb-456a-b77b-386213f4bf98\" (UID: \"ce8c3e62-2beb-456a-b77b-386213f4bf98\") " Feb 17 18:02:06 crc kubenswrapper[4680]: I0217 18:02:06.716633 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce8c3e62-2beb-456a-b77b-386213f4bf98-combined-ca-bundle\") pod \"ce8c3e62-2beb-456a-b77b-386213f4bf98\" (UID: \"ce8c3e62-2beb-456a-b77b-386213f4bf98\") " Feb 17 18:02:06 crc kubenswrapper[4680]: I0217 18:02:06.721118 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce8c3e62-2beb-456a-b77b-386213f4bf98-kube-api-access-qw2jj" (OuterVolumeSpecName: "kube-api-access-qw2jj") pod "ce8c3e62-2beb-456a-b77b-386213f4bf98" (UID: "ce8c3e62-2beb-456a-b77b-386213f4bf98"). InnerVolumeSpecName "kube-api-access-qw2jj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:02:06 crc kubenswrapper[4680]: I0217 18:02:06.742611 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce8c3e62-2beb-456a-b77b-386213f4bf98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce8c3e62-2beb-456a-b77b-386213f4bf98" (UID: "ce8c3e62-2beb-456a-b77b-386213f4bf98"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:02:06 crc kubenswrapper[4680]: I0217 18:02:06.766492 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce8c3e62-2beb-456a-b77b-386213f4bf98-config-data" (OuterVolumeSpecName: "config-data") pod "ce8c3e62-2beb-456a-b77b-386213f4bf98" (UID: "ce8c3e62-2beb-456a-b77b-386213f4bf98"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:02:06 crc kubenswrapper[4680]: I0217 18:02:06.818451 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qw2jj\" (UniqueName: \"kubernetes.io/projected/ce8c3e62-2beb-456a-b77b-386213f4bf98-kube-api-access-qw2jj\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:06 crc kubenswrapper[4680]: I0217 18:02:06.818484 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce8c3e62-2beb-456a-b77b-386213f4bf98-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:06 crc kubenswrapper[4680]: I0217 18:02:06.818493 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce8c3e62-2beb-456a-b77b-386213f4bf98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.264708 4680 generic.go:334] "Generic (PLEG): container finished" podID="45501435-01c6-448e-84ec-3fee833f0274" containerID="9613b672e030a0c261ed676a0916da714ad4f13a44d258e27365a3dffa9d3145" exitCode=0 Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.264791 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-687dh" event={"ID":"45501435-01c6-448e-84ec-3fee833f0274","Type":"ContainerDied","Data":"9613b672e030a0c261ed676a0916da714ad4f13a44d258e27365a3dffa9d3145"} Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.264825 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-687dh" event={"ID":"45501435-01c6-448e-84ec-3fee833f0274","Type":"ContainerStarted","Data":"73a3c8cf3bf76ea5c3b26b6f8eb1913aea715d02b26e1b4078f351cd1121011b"} Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.276357 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-fcrz7" podUID="023f6cd4-af35-4af3-8588-fdd4a49dc658" containerName="dnsmasq-dns" containerID="cri-o://b50bdb7fd8ee4e85f0378f01560f28ad2818cbf201c362bb1321b65e7c785de7" gracePeriod=10 Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.276497 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-96c9q" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.277164 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-96c9q" event={"ID":"ce8c3e62-2beb-456a-b77b-386213f4bf98","Type":"ContainerDied","Data":"045ca1c7647c6eefe83e7c9e0212e3ff9225304fc2475d14bece280f8575d081"} Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.277186 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="045ca1c7647c6eefe83e7c9e0212e3ff9225304fc2475d14bece280f8575d081" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.567721 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-dkvms"] Feb 17 18:02:07 crc kubenswrapper[4680]: E0217 18:02:07.568556 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce8c3e62-2beb-456a-b77b-386213f4bf98" containerName="keystone-db-sync" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.568575 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce8c3e62-2beb-456a-b77b-386213f4bf98" containerName="keystone-db-sync" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.568791 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce8c3e62-2beb-456a-b77b-386213f4bf98" containerName="keystone-db-sync" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.569483 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dkvms" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.573342 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fww75" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.573569 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.573714 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.573922 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.574171 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.598522 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-687dh"] Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.613438 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dkvms"] Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.659427 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-4xhkb"] Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.661155 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-4xhkb" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.684039 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-4xhkb"] Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.755498 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-679kk\" (UniqueName: \"kubernetes.io/projected/da3851c2-401e-446e-8677-e9fb12b8fd9f-kube-api-access-679kk\") pod \"keystone-bootstrap-dkvms\" (UID: \"da3851c2-401e-446e-8677-e9fb12b8fd9f\") " pod="openstack/keystone-bootstrap-dkvms" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.755600 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a17525b6-7d49-463e-9b5a-a8e0da0912ea-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-4xhkb\" (UID: \"a17525b6-7d49-463e-9b5a-a8e0da0912ea\") " pod="openstack/dnsmasq-dns-847c4cc679-4xhkb" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.755637 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a17525b6-7d49-463e-9b5a-a8e0da0912ea-config\") pod \"dnsmasq-dns-847c4cc679-4xhkb\" (UID: \"a17525b6-7d49-463e-9b5a-a8e0da0912ea\") " pod="openstack/dnsmasq-dns-847c4cc679-4xhkb" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.755675 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a17525b6-7d49-463e-9b5a-a8e0da0912ea-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-4xhkb\" (UID: \"a17525b6-7d49-463e-9b5a-a8e0da0912ea\") " pod="openstack/dnsmasq-dns-847c4cc679-4xhkb" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.755715 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a17525b6-7d49-463e-9b5a-a8e0da0912ea-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-4xhkb\" (UID: \"a17525b6-7d49-463e-9b5a-a8e0da0912ea\") " pod="openstack/dnsmasq-dns-847c4cc679-4xhkb" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.755749 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da3851c2-401e-446e-8677-e9fb12b8fd9f-scripts\") pod \"keystone-bootstrap-dkvms\" (UID: \"da3851c2-401e-446e-8677-e9fb12b8fd9f\") " pod="openstack/keystone-bootstrap-dkvms" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.755787 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da3851c2-401e-446e-8677-e9fb12b8fd9f-combined-ca-bundle\") pod \"keystone-bootstrap-dkvms\" (UID: \"da3851c2-401e-446e-8677-e9fb12b8fd9f\") " pod="openstack/keystone-bootstrap-dkvms" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.755834 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a17525b6-7d49-463e-9b5a-a8e0da0912ea-dns-svc\") pod \"dnsmasq-dns-847c4cc679-4xhkb\" (UID: \"a17525b6-7d49-463e-9b5a-a8e0da0912ea\") " pod="openstack/dnsmasq-dns-847c4cc679-4xhkb" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.755858 4680 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da3851c2-401e-446e-8677-e9fb12b8fd9f-credential-keys\") pod \"keystone-bootstrap-dkvms\" (UID: \"da3851c2-401e-446e-8677-e9fb12b8fd9f\") " pod="openstack/keystone-bootstrap-dkvms" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.755905 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da3851c2-401e-446e-8677-e9fb12b8fd9f-fernet-keys\") pod \"keystone-bootstrap-dkvms\" (UID: \"da3851c2-401e-446e-8677-e9fb12b8fd9f\") " pod="openstack/keystone-bootstrap-dkvms" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.755959 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da3851c2-401e-446e-8677-e9fb12b8fd9f-config-data\") pod \"keystone-bootstrap-dkvms\" (UID: \"da3851c2-401e-446e-8677-e9fb12b8fd9f\") " pod="openstack/keystone-bootstrap-dkvms" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.756001 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh6wg\" (UniqueName: \"kubernetes.io/projected/a17525b6-7d49-463e-9b5a-a8e0da0912ea-kube-api-access-wh6wg\") pod \"dnsmasq-dns-847c4cc679-4xhkb\" (UID: \"a17525b6-7d49-463e-9b5a-a8e0da0912ea\") " pod="openstack/dnsmasq-dns-847c4cc679-4xhkb" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.858257 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a17525b6-7d49-463e-9b5a-a8e0da0912ea-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-4xhkb\" (UID: \"a17525b6-7d49-463e-9b5a-a8e0da0912ea\") " pod="openstack/dnsmasq-dns-847c4cc679-4xhkb" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.858309 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a17525b6-7d49-463e-9b5a-a8e0da0912ea-config\") pod \"dnsmasq-dns-847c4cc679-4xhkb\" (UID: \"a17525b6-7d49-463e-9b5a-a8e0da0912ea\") " pod="openstack/dnsmasq-dns-847c4cc679-4xhkb" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.858358 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a17525b6-7d49-463e-9b5a-a8e0da0912ea-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-4xhkb\" (UID: \"a17525b6-7d49-463e-9b5a-a8e0da0912ea\") " pod="openstack/dnsmasq-dns-847c4cc679-4xhkb" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.858394 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a17525b6-7d49-463e-9b5a-a8e0da0912ea-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-4xhkb\" (UID: \"a17525b6-7d49-463e-9b5a-a8e0da0912ea\") " pod="openstack/dnsmasq-dns-847c4cc679-4xhkb" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.858417 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da3851c2-401e-446e-8677-e9fb12b8fd9f-scripts\") pod \"keystone-bootstrap-dkvms\" (UID: \"da3851c2-401e-446e-8677-e9fb12b8fd9f\") " pod="openstack/keystone-bootstrap-dkvms" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.858502 4680 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da3851c2-401e-446e-8677-e9fb12b8fd9f-combined-ca-bundle\") pod \"keystone-bootstrap-dkvms\" (UID: \"da3851c2-401e-446e-8677-e9fb12b8fd9f\") " pod="openstack/keystone-bootstrap-dkvms" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.858531 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a17525b6-7d49-463e-9b5a-a8e0da0912ea-dns-svc\") pod \"dnsmasq-dns-847c4cc679-4xhkb\" (UID: \"a17525b6-7d49-463e-9b5a-a8e0da0912ea\") " pod="openstack/dnsmasq-dns-847c4cc679-4xhkb" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.858549 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da3851c2-401e-446e-8677-e9fb12b8fd9f-credential-keys\") pod \"keystone-bootstrap-dkvms\" (UID: \"da3851c2-401e-446e-8677-e9fb12b8fd9f\") " pod="openstack/keystone-bootstrap-dkvms" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.858567 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da3851c2-401e-446e-8677-e9fb12b8fd9f-fernet-keys\") pod \"keystone-bootstrap-dkvms\" (UID: \"da3851c2-401e-446e-8677-e9fb12b8fd9f\") " pod="openstack/keystone-bootstrap-dkvms" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.858608 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da3851c2-401e-446e-8677-e9fb12b8fd9f-config-data\") pod \"keystone-bootstrap-dkvms\" (UID: \"da3851c2-401e-446e-8677-e9fb12b8fd9f\") " pod="openstack/keystone-bootstrap-dkvms" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.858635 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh6wg\" (UniqueName: \"kubernetes.io/projected/a17525b6-7d49-463e-9b5a-a8e0da0912ea-kube-api-access-wh6wg\") pod \"dnsmasq-dns-847c4cc679-4xhkb\" (UID: \"a17525b6-7d49-463e-9b5a-a8e0da0912ea\") " pod="openstack/dnsmasq-dns-847c4cc679-4xhkb" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.858667 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-679kk\" (UniqueName: \"kubernetes.io/projected/da3851c2-401e-446e-8677-e9fb12b8fd9f-kube-api-access-679kk\") pod \"keystone-bootstrap-dkvms\" (UID: \"da3851c2-401e-446e-8677-e9fb12b8fd9f\") " pod="openstack/keystone-bootstrap-dkvms" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.859572 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a17525b6-7d49-463e-9b5a-a8e0da0912ea-config\") pod \"dnsmasq-dns-847c4cc679-4xhkb\" (UID: \"a17525b6-7d49-463e-9b5a-a8e0da0912ea\") " pod="openstack/dnsmasq-dns-847c4cc679-4xhkb" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.859975 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a17525b6-7d49-463e-9b5a-a8e0da0912ea-dns-svc\") pod \"dnsmasq-dns-847c4cc679-4xhkb\" (UID: \"a17525b6-7d49-463e-9b5a-a8e0da0912ea\") " pod="openstack/dnsmasq-dns-847c4cc679-4xhkb" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.863088 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a17525b6-7d49-463e-9b5a-a8e0da0912ea-ovsdbserver-sb\") pod 
\"dnsmasq-dns-847c4cc679-4xhkb\" (UID: \"a17525b6-7d49-463e-9b5a-a8e0da0912ea\") " pod="openstack/dnsmasq-dns-847c4cc679-4xhkb" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.866414 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a17525b6-7d49-463e-9b5a-a8e0da0912ea-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-4xhkb\" (UID: \"a17525b6-7d49-463e-9b5a-a8e0da0912ea\") " pod="openstack/dnsmasq-dns-847c4cc679-4xhkb" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.867726 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a17525b6-7d49-463e-9b5a-a8e0da0912ea-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-4xhkb\" (UID: \"a17525b6-7d49-463e-9b5a-a8e0da0912ea\") " pod="openstack/dnsmasq-dns-847c4cc679-4xhkb" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.872333 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da3851c2-401e-446e-8677-e9fb12b8fd9f-config-data\") pod \"keystone-bootstrap-dkvms\" (UID: \"da3851c2-401e-446e-8677-e9fb12b8fd9f\") " pod="openstack/keystone-bootstrap-dkvms" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.873210 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da3851c2-401e-446e-8677-e9fb12b8fd9f-credential-keys\") pod \"keystone-bootstrap-dkvms\" (UID: \"da3851c2-401e-446e-8677-e9fb12b8fd9f\") " pod="openstack/keystone-bootstrap-dkvms" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.879008 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da3851c2-401e-446e-8677-e9fb12b8fd9f-combined-ca-bundle\") pod \"keystone-bootstrap-dkvms\" (UID: \"da3851c2-401e-446e-8677-e9fb12b8fd9f\") " pod="openstack/keystone-bootstrap-dkvms" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.885724 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da3851c2-401e-446e-8677-e9fb12b8fd9f-scripts\") pod \"keystone-bootstrap-dkvms\" (UID: \"da3851c2-401e-446e-8677-e9fb12b8fd9f\") " pod="openstack/keystone-bootstrap-dkvms" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.885995 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da3851c2-401e-446e-8677-e9fb12b8fd9f-fernet-keys\") pod \"keystone-bootstrap-dkvms\" (UID: \"da3851c2-401e-446e-8677-e9fb12b8fd9f\") " pod="openstack/keystone-bootstrap-dkvms" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.908779 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-679kk\" (UniqueName: \"kubernetes.io/projected/da3851c2-401e-446e-8677-e9fb12b8fd9f-kube-api-access-679kk\") pod \"keystone-bootstrap-dkvms\" (UID: \"da3851c2-401e-446e-8677-e9fb12b8fd9f\") " pod="openstack/keystone-bootstrap-dkvms" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.934413 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7bfbfb7db7-m428m"] Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.936002 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7bfbfb7db7-m428m" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.941921 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.942280 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-lmt8x" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.942487 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.942593 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.952666 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh6wg\" (UniqueName: \"kubernetes.io/projected/a17525b6-7d49-463e-9b5a-a8e0da0912ea-kube-api-access-wh6wg\") pod \"dnsmasq-dns-847c4cc679-4xhkb\" (UID: \"a17525b6-7d49-463e-9b5a-a8e0da0912ea\") " pod="openstack/dnsmasq-dns-847c4cc679-4xhkb" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.965854 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2d39666f-eb25-414e-a508-4b437b45254f-horizon-secret-key\") pod \"horizon-7bfbfb7db7-m428m\" (UID: \"2d39666f-eb25-414e-a508-4b437b45254f\") " pod="openstack/horizon-7bfbfb7db7-m428m" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.965897 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d39666f-eb25-414e-a508-4b437b45254f-scripts\") pod \"horizon-7bfbfb7db7-m428m\" (UID: \"2d39666f-eb25-414e-a508-4b437b45254f\") " pod="openstack/horizon-7bfbfb7db7-m428m" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.965975 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqsz5\" (UniqueName: \"kubernetes.io/projected/2d39666f-eb25-414e-a508-4b437b45254f-kube-api-access-wqsz5\") pod \"horizon-7bfbfb7db7-m428m\" (UID: \"2d39666f-eb25-414e-a508-4b437b45254f\") " pod="openstack/horizon-7bfbfb7db7-m428m" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.966042 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d39666f-eb25-414e-a508-4b437b45254f-logs\") pod \"horizon-7bfbfb7db7-m428m\" (UID: \"2d39666f-eb25-414e-a508-4b437b45254f\") " pod="openstack/horizon-7bfbfb7db7-m428m" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.966077 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d39666f-eb25-414e-a508-4b437b45254f-config-data\") pod \"horizon-7bfbfb7db7-m428m\" (UID: \"2d39666f-eb25-414e-a508-4b437b45254f\") " pod="openstack/horizon-7bfbfb7db7-m428m" Feb 17 18:02:07 crc kubenswrapper[4680]: I0217 18:02:07.970085 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-9458w"] Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.005062 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-9458w" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.017506 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-4xhkb" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.048473 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.048715 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-pzc2g" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.048859 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.067664 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d39666f-eb25-414e-a508-4b437b45254f-config-data\") pod \"horizon-7bfbfb7db7-m428m\" (UID: \"2d39666f-eb25-414e-a508-4b437b45254f\") " pod="openstack/horizon-7bfbfb7db7-m428m" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.067716 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f869168f-8063-46a6-b574-d8be569ac733-etc-machine-id\") pod \"cinder-db-sync-9458w\" (UID: \"f869168f-8063-46a6-b574-d8be569ac733\") " pod="openstack/cinder-db-sync-9458w" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.067742 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f869168f-8063-46a6-b574-d8be569ac733-combined-ca-bundle\") pod \"cinder-db-sync-9458w\" (UID: \"f869168f-8063-46a6-b574-d8be569ac733\") " pod="openstack/cinder-db-sync-9458w" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.067782 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2d39666f-eb25-414e-a508-4b437b45254f-horizon-secret-key\") pod \"horizon-7bfbfb7db7-m428m\" (UID: \"2d39666f-eb25-414e-a508-4b437b45254f\") " pod="openstack/horizon-7bfbfb7db7-m428m" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.067814 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d39666f-eb25-414e-a508-4b437b45254f-scripts\") pod \"horizon-7bfbfb7db7-m428m\" (UID: \"2d39666f-eb25-414e-a508-4b437b45254f\") " pod="openstack/horizon-7bfbfb7db7-m428m" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.067868 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f869168f-8063-46a6-b574-d8be569ac733-config-data\") pod \"cinder-db-sync-9458w\" (UID: \"f869168f-8063-46a6-b574-d8be569ac733\") " pod="openstack/cinder-db-sync-9458w" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.067902 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f869168f-8063-46a6-b574-d8be569ac733-db-sync-config-data\") pod \"cinder-db-sync-9458w\" (UID: \"f869168f-8063-46a6-b574-d8be569ac733\") " pod="openstack/cinder-db-sync-9458w" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.067962 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f869168f-8063-46a6-b574-d8be569ac733-scripts\") pod \"cinder-db-sync-9458w\" 
(UID: \"f869168f-8063-46a6-b574-d8be569ac733\") " pod="openstack/cinder-db-sync-9458w" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.067982 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtxmg\" (UniqueName: \"kubernetes.io/projected/f869168f-8063-46a6-b574-d8be569ac733-kube-api-access-jtxmg\") pod \"cinder-db-sync-9458w\" (UID: \"f869168f-8063-46a6-b574-d8be569ac733\") " pod="openstack/cinder-db-sync-9458w" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.068061 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqsz5\" (UniqueName: \"kubernetes.io/projected/2d39666f-eb25-414e-a508-4b437b45254f-kube-api-access-wqsz5\") pod \"horizon-7bfbfb7db7-m428m\" (UID: \"2d39666f-eb25-414e-a508-4b437b45254f\") " pod="openstack/horizon-7bfbfb7db7-m428m" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.068117 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d39666f-eb25-414e-a508-4b437b45254f-logs\") pod \"horizon-7bfbfb7db7-m428m\" (UID: \"2d39666f-eb25-414e-a508-4b437b45254f\") " pod="openstack/horizon-7bfbfb7db7-m428m" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.068658 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d39666f-eb25-414e-a508-4b437b45254f-logs\") pod \"horizon-7bfbfb7db7-m428m\" (UID: \"2d39666f-eb25-414e-a508-4b437b45254f\") " pod="openstack/horizon-7bfbfb7db7-m428m" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.069743 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d39666f-eb25-414e-a508-4b437b45254f-config-data\") pod \"horizon-7bfbfb7db7-m428m\" (UID: \"2d39666f-eb25-414e-a508-4b437b45254f\") " pod="openstack/horizon-7bfbfb7db7-m428m" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.080669 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7bfbfb7db7-m428m"] Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.081481 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d39666f-eb25-414e-a508-4b437b45254f-scripts\") pod \"horizon-7bfbfb7db7-m428m\" (UID: \"2d39666f-eb25-414e-a508-4b437b45254f\") " pod="openstack/horizon-7bfbfb7db7-m428m" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.107893 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2d39666f-eb25-414e-a508-4b437b45254f-horizon-secret-key\") pod \"horizon-7bfbfb7db7-m428m\" (UID: \"2d39666f-eb25-414e-a508-4b437b45254f\") " pod="openstack/horizon-7bfbfb7db7-m428m" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.108088 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-9458w"] Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.170007 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqsz5\" (UniqueName: \"kubernetes.io/projected/2d39666f-eb25-414e-a508-4b437b45254f-kube-api-access-wqsz5\") pod \"horizon-7bfbfb7db7-m428m\" (UID: \"2d39666f-eb25-414e-a508-4b437b45254f\") " pod="openstack/horizon-7bfbfb7db7-m428m" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.171377 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f869168f-8063-46a6-b574-d8be569ac733-etc-machine-id\") pod \"cinder-db-sync-9458w\" (UID: \"f869168f-8063-46a6-b574-d8be569ac733\") " pod="openstack/cinder-db-sync-9458w" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.203460 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f869168f-8063-46a6-b574-d8be569ac733-combined-ca-bundle\") pod \"cinder-db-sync-9458w\" (UID: \"f869168f-8063-46a6-b574-d8be569ac733\") " pod="openstack/cinder-db-sync-9458w" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.203881 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f869168f-8063-46a6-b574-d8be569ac733-config-data\") pod \"cinder-db-sync-9458w\" (UID: \"f869168f-8063-46a6-b574-d8be569ac733\") " pod="openstack/cinder-db-sync-9458w" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.204022 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f869168f-8063-46a6-b574-d8be569ac733-db-sync-config-data\") pod \"cinder-db-sync-9458w\" (UID: \"f869168f-8063-46a6-b574-d8be569ac733\") " pod="openstack/cinder-db-sync-9458w" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.204133 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f869168f-8063-46a6-b574-d8be569ac733-scripts\") pod \"cinder-db-sync-9458w\" (UID: \"f869168f-8063-46a6-b574-d8be569ac733\") " pod="openstack/cinder-db-sync-9458w" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.204219 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtxmg\" (UniqueName: \"kubernetes.io/projected/f869168f-8063-46a6-b574-d8be569ac733-kube-api-access-jtxmg\") pod \"cinder-db-sync-9458w\" (UID: \"f869168f-8063-46a6-b574-d8be569ac733\") " pod="openstack/cinder-db-sync-9458w" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.171444 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f869168f-8063-46a6-b574-d8be569ac733-etc-machine-id\") pod \"cinder-db-sync-9458w\" (UID: \"f869168f-8063-46a6-b574-d8be569ac733\") " pod="openstack/cinder-db-sync-9458w" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.246919 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f869168f-8063-46a6-b574-d8be569ac733-config-data\") pod \"cinder-db-sync-9458w\" (UID: \"f869168f-8063-46a6-b574-d8be569ac733\") " pod="openstack/cinder-db-sync-9458w" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.248139 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dkvms" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.269560 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f869168f-8063-46a6-b574-d8be569ac733-combined-ca-bundle\") pod \"cinder-db-sync-9458w\" (UID: \"f869168f-8063-46a6-b574-d8be569ac733\") " pod="openstack/cinder-db-sync-9458w" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.287833 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-bn2nz"] Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.290448 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f869168f-8063-46a6-b574-d8be569ac733-scripts\") pod \"cinder-db-sync-9458w\" (UID: \"f869168f-8063-46a6-b574-d8be569ac733\") " pod="openstack/cinder-db-sync-9458w" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.290875 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f869168f-8063-46a6-b574-d8be569ac733-db-sync-config-data\") pod \"cinder-db-sync-9458w\" (UID: \"f869168f-8063-46a6-b574-d8be569ac733\") " pod="openstack/cinder-db-sync-9458w" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.304193 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-bn2nz" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.316882 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.317659 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.330722 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-cvxzz" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.350400 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-bn2nz"] Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.356168 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtxmg\" (UniqueName: \"kubernetes.io/projected/f869168f-8063-46a6-b574-d8be569ac733-kube-api-access-jtxmg\") pod \"cinder-db-sync-9458w\" (UID: \"f869168f-8063-46a6-b574-d8be569ac733\") " pod="openstack/cinder-db-sync-9458w" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.387435 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7bfbfb7db7-m428m" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.416272 4680 generic.go:334] "Generic (PLEG): container finished" podID="023f6cd4-af35-4af3-8588-fdd4a49dc658" containerID="b50bdb7fd8ee4e85f0378f01560f28ad2818cbf201c362bb1321b65e7c785de7" exitCode=0 Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.416347 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-fcrz7" event={"ID":"023f6cd4-af35-4af3-8588-fdd4a49dc658","Type":"ContainerDied","Data":"b50bdb7fd8ee4e85f0378f01560f28ad2818cbf201c362bb1321b65e7c785de7"} Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.444832 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dce4bc8a-e4f7-4716-abc6-cb70c238f4cc-combined-ca-bundle\") pod \"neutron-db-sync-bn2nz\" (UID: \"dce4bc8a-e4f7-4716-abc6-cb70c238f4cc\") " pod="openstack/neutron-db-sync-bn2nz" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.444875 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6wx6\" (UniqueName: \"kubernetes.io/projected/dce4bc8a-e4f7-4716-abc6-cb70c238f4cc-kube-api-access-l6wx6\") pod \"neutron-db-sync-bn2nz\" (UID: \"dce4bc8a-e4f7-4716-abc6-cb70c238f4cc\") " pod="openstack/neutron-db-sync-bn2nz" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.445167 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dce4bc8a-e4f7-4716-abc6-cb70c238f4cc-config\") pod \"neutron-db-sync-bn2nz\" (UID: \"dce4bc8a-e4f7-4716-abc6-cb70c238f4cc\") " pod="openstack/neutron-db-sync-bn2nz" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.473376 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-76rb8"] Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.474780 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-76rb8" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.489095 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4jzcp" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.489375 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.489267 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.489461 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.491026 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.513221 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.513434 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.513554 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-429lj" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.537149 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-76rb8"] Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.556218 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-4xhkb"] Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.559533 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc2t5\" (UniqueName: \"kubernetes.io/projected/b2c53952-8df2-4f30-ae29-5778822cae38-kube-api-access-hc2t5\") pod \"placement-db-sync-76rb8\" (UID: \"b2c53952-8df2-4f30-ae29-5778822cae38\") " pod="openstack/placement-db-sync-76rb8" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.560786 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2c53952-8df2-4f30-ae29-5778822cae38-scripts\") pod \"placement-db-sync-76rb8\" (UID: \"b2c53952-8df2-4f30-ae29-5778822cae38\") " pod="openstack/placement-db-sync-76rb8" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.560918 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2c53952-8df2-4f30-ae29-5778822cae38-logs\") pod \"placement-db-sync-76rb8\" (UID: \"b2c53952-8df2-4f30-ae29-5778822cae38\") " pod="openstack/placement-db-sync-76rb8" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.561032 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfcfa220-e137-42fd-a4ab-8b1682720cb1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.561556 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ppts\" (UniqueName: \"kubernetes.io/projected/dfcfa220-e137-42fd-a4ab-8b1682720cb1-kube-api-access-4ppts\") pod \"glance-default-external-api-0\" (UID: \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.561749 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfcfa220-e137-42fd-a4ab-8b1682720cb1-scripts\") pod \"glance-default-external-api-0\" (UID: \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.561938 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dce4bc8a-e4f7-4716-abc6-cb70c238f4cc-config\") pod 
\"neutron-db-sync-bn2nz\" (UID: \"dce4bc8a-e4f7-4716-abc6-cb70c238f4cc\") " pod="openstack/neutron-db-sync-bn2nz" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.562070 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2c53952-8df2-4f30-ae29-5778822cae38-config-data\") pod \"placement-db-sync-76rb8\" (UID: \"b2c53952-8df2-4f30-ae29-5778822cae38\") " pod="openstack/placement-db-sync-76rb8" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.562182 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2c53952-8df2-4f30-ae29-5778822cae38-combined-ca-bundle\") pod \"placement-db-sync-76rb8\" (UID: \"b2c53952-8df2-4f30-ae29-5778822cae38\") " pod="openstack/placement-db-sync-76rb8" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.562292 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfcfa220-e137-42fd-a4ab-8b1682720cb1-logs\") pod \"glance-default-external-api-0\" (UID: \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.562416 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dfcfa220-e137-42fd-a4ab-8b1682720cb1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.562526 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.562695 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfcfa220-e137-42fd-a4ab-8b1682720cb1-config-data\") pod \"glance-default-external-api-0\" (UID: \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.562889 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dce4bc8a-e4f7-4716-abc6-cb70c238f4cc-combined-ca-bundle\") pod \"neutron-db-sync-bn2nz\" (UID: \"dce4bc8a-e4f7-4716-abc6-cb70c238f4cc\") " pod="openstack/neutron-db-sync-bn2nz" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.563001 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6wx6\" (UniqueName: \"kubernetes.io/projected/dce4bc8a-e4f7-4716-abc6-cb70c238f4cc-kube-api-access-l6wx6\") pod \"neutron-db-sync-bn2nz\" (UID: \"dce4bc8a-e4f7-4716-abc6-cb70c238f4cc\") " pod="openstack/neutron-db-sync-bn2nz" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.564687 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-477hc"] Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.565792 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-477hc" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.584527 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.584787 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-np774" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.597143 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-9458w" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.603400 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-477hc"] Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.642026 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/dce4bc8a-e4f7-4716-abc6-cb70c238f4cc-config\") pod \"neutron-db-sync-bn2nz\" (UID: \"dce4bc8a-e4f7-4716-abc6-cb70c238f4cc\") " pod="openstack/neutron-db-sync-bn2nz" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.642589 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dce4bc8a-e4f7-4716-abc6-cb70c238f4cc-combined-ca-bundle\") pod \"neutron-db-sync-bn2nz\" (UID: \"dce4bc8a-e4f7-4716-abc6-cb70c238f4cc\") " pod="openstack/neutron-db-sync-bn2nz" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.668284 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7ffb7b86bf-9ttrt"] Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.671157 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a49d620c-c0c2-41cb-b458-cb67351334ad-combined-ca-bundle\") pod \"barbican-db-sync-477hc\" (UID: \"a49d620c-c0c2-41cb-b458-cb67351334ad\") " pod="openstack/barbican-db-sync-477hc" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.679531 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfcfa220-e137-42fd-a4ab-8b1682720cb1-scripts\") pod \"glance-default-external-api-0\" (UID: \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.679657 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2c53952-8df2-4f30-ae29-5778822cae38-config-data\") pod \"placement-db-sync-76rb8\" (UID: \"b2c53952-8df2-4f30-ae29-5778822cae38\") " pod="openstack/placement-db-sync-76rb8" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.679692 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2c53952-8df2-4f30-ae29-5778822cae38-combined-ca-bundle\") pod \"placement-db-sync-76rb8\" (UID: \"b2c53952-8df2-4f30-ae29-5778822cae38\") " pod="openstack/placement-db-sync-76rb8" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.679732 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfcfa220-e137-42fd-a4ab-8b1682720cb1-logs\") pod \"glance-default-external-api-0\" (UID: \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:08 crc 
kubenswrapper[4680]: I0217 18:02:08.679763 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dfcfa220-e137-42fd-a4ab-8b1682720cb1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.679792 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.679919 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfcfa220-e137-42fd-a4ab-8b1682720cb1-config-data\") pod \"glance-default-external-api-0\" (UID: \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.679980 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a49d620c-c0c2-41cb-b458-cb67351334ad-db-sync-config-data\") pod \"barbican-db-sync-477hc\" (UID: \"a49d620c-c0c2-41cb-b458-cb67351334ad\") " pod="openstack/barbican-db-sync-477hc" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.680008 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpgx4\" (UniqueName: \"kubernetes.io/projected/a49d620c-c0c2-41cb-b458-cb67351334ad-kube-api-access-wpgx4\") pod \"barbican-db-sync-477hc\" (UID: \"a49d620c-c0c2-41cb-b458-cb67351334ad\") " pod="openstack/barbican-db-sync-477hc" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.680130 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc2t5\" (UniqueName: \"kubernetes.io/projected/b2c53952-8df2-4f30-ae29-5778822cae38-kube-api-access-hc2t5\") pod \"placement-db-sync-76rb8\" (UID: \"b2c53952-8df2-4f30-ae29-5778822cae38\") " pod="openstack/placement-db-sync-76rb8" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.680166 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2c53952-8df2-4f30-ae29-5778822cae38-scripts\") pod \"placement-db-sync-76rb8\" (UID: \"b2c53952-8df2-4f30-ae29-5778822cae38\") " pod="openstack/placement-db-sync-76rb8" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.680203 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2c53952-8df2-4f30-ae29-5778822cae38-logs\") pod \"placement-db-sync-76rb8\" (UID: \"b2c53952-8df2-4f30-ae29-5778822cae38\") " pod="openstack/placement-db-sync-76rb8" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.680239 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfcfa220-e137-42fd-a4ab-8b1682720cb1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.680289 4680 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4ppts\" (UniqueName: \"kubernetes.io/projected/dfcfa220-e137-42fd-a4ab-8b1682720cb1-kube-api-access-4ppts\") pod \"glance-default-external-api-0\" (UID: \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.680582 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7ffb7b86bf-9ttrt" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.682466 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfcfa220-e137-42fd-a4ab-8b1682720cb1-logs\") pod \"glance-default-external-api-0\" (UID: \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.695462 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2c53952-8df2-4f30-ae29-5778822cae38-config-data\") pod \"placement-db-sync-76rb8\" (UID: \"b2c53952-8df2-4f30-ae29-5778822cae38\") " pod="openstack/placement-db-sync-76rb8" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.695882 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dfcfa220-e137-42fd-a4ab-8b1682720cb1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.696602 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2c53952-8df2-4f30-ae29-5778822cae38-logs\") pod \"placement-db-sync-76rb8\" (UID: \"b2c53952-8df2-4f30-ae29-5778822cae38\") " pod="openstack/placement-db-sync-76rb8" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.696896 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2c53952-8df2-4f30-ae29-5778822cae38-combined-ca-bundle\") pod \"placement-db-sync-76rb8\" (UID: \"b2c53952-8df2-4f30-ae29-5778822cae38\") " pod="openstack/placement-db-sync-76rb8" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.697368 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.697855 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.699558 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfcfa220-e137-42fd-a4ab-8b1682720cb1-config-data\") pod \"glance-default-external-api-0\" (UID: \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.735118 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2c53952-8df2-4f30-ae29-5778822cae38-scripts\") pod \"placement-db-sync-76rb8\" (UID: 
\"b2c53952-8df2-4f30-ae29-5778822cae38\") " pod="openstack/placement-db-sync-76rb8" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.742661 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-gvq9l"] Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.744078 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.748107 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6wx6\" (UniqueName: \"kubernetes.io/projected/dce4bc8a-e4f7-4716-abc6-cb70c238f4cc-kube-api-access-l6wx6\") pod \"neutron-db-sync-bn2nz\" (UID: \"dce4bc8a-e4f7-4716-abc6-cb70c238f4cc\") " pod="openstack/neutron-db-sync-bn2nz" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.753192 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfcfa220-e137-42fd-a4ab-8b1682720cb1-scripts\") pod \"glance-default-external-api-0\" (UID: \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.759886 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ppts\" (UniqueName: \"kubernetes.io/projected/dfcfa220-e137-42fd-a4ab-8b1682720cb1-kube-api-access-4ppts\") pod \"glance-default-external-api-0\" (UID: \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.783394 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a49d620c-c0c2-41cb-b458-cb67351334ad-combined-ca-bundle\") pod \"barbican-db-sync-477hc\" (UID: \"a49d620c-c0c2-41cb-b458-cb67351334ad\") " pod="openstack/barbican-db-sync-477hc" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.783437 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-gvq9l\" (UID: \"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.783477 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6f610194-e55a-4fe1-821a-23921b314518-horizon-secret-key\") pod \"horizon-7ffb7b86bf-9ttrt\" (UID: \"6f610194-e55a-4fe1-821a-23921b314518\") " pod="openstack/horizon-7ffb7b86bf-9ttrt" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.783498 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-config\") pod \"dnsmasq-dns-785d8bcb8c-gvq9l\" (UID: \"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.783536 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6f610194-e55a-4fe1-821a-23921b314518-config-data\") pod \"horizon-7ffb7b86bf-9ttrt\" (UID: \"6f610194-e55a-4fe1-821a-23921b314518\") " pod="openstack/horizon-7ffb7b86bf-9ttrt" 
Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.783575 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6f610194-e55a-4fe1-821a-23921b314518-scripts\") pod \"horizon-7ffb7b86bf-9ttrt\" (UID: \"6f610194-e55a-4fe1-821a-23921b314518\") " pod="openstack/horizon-7ffb7b86bf-9ttrt" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.783609 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a49d620c-c0c2-41cb-b458-cb67351334ad-db-sync-config-data\") pod \"barbican-db-sync-477hc\" (UID: \"a49d620c-c0c2-41cb-b458-cb67351334ad\") " pod="openstack/barbican-db-sync-477hc" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.783626 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpgx4\" (UniqueName: \"kubernetes.io/projected/a49d620c-c0c2-41cb-b458-cb67351334ad-kube-api-access-wpgx4\") pod \"barbican-db-sync-477hc\" (UID: \"a49d620c-c0c2-41cb-b458-cb67351334ad\") " pod="openstack/barbican-db-sync-477hc" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.783653 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlmfp\" (UniqueName: \"kubernetes.io/projected/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-kube-api-access-wlmfp\") pod \"dnsmasq-dns-785d8bcb8c-gvq9l\" (UID: \"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.783691 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-gvq9l\" (UID: \"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.783714 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvdmv\" (UniqueName: \"kubernetes.io/projected/6f610194-e55a-4fe1-821a-23921b314518-kube-api-access-vvdmv\") pod \"horizon-7ffb7b86bf-9ttrt\" (UID: \"6f610194-e55a-4fe1-821a-23921b314518\") " pod="openstack/horizon-7ffb7b86bf-9ttrt" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.783741 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-gvq9l\" (UID: \"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.783764 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f610194-e55a-4fe1-821a-23921b314518-logs\") pod \"horizon-7ffb7b86bf-9ttrt\" (UID: \"6f610194-e55a-4fe1-821a-23921b314518\") " pod="openstack/horizon-7ffb7b86bf-9ttrt" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.783780 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-gvq9l\" (UID: \"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b\") " 
pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.787605 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7ffb7b86bf-9ttrt"] Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.797612 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a49d620c-c0c2-41cb-b458-cb67351334ad-db-sync-config-data\") pod \"barbican-db-sync-477hc\" (UID: \"a49d620c-c0c2-41cb-b458-cb67351334ad\") " pod="openstack/barbican-db-sync-477hc" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.798210 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfcfa220-e137-42fd-a4ab-8b1682720cb1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.798229 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc2t5\" (UniqueName: \"kubernetes.io/projected/b2c53952-8df2-4f30-ae29-5778822cae38-kube-api-access-hc2t5\") pod \"placement-db-sync-76rb8\" (UID: \"b2c53952-8df2-4f30-ae29-5778822cae38\") " pod="openstack/placement-db-sync-76rb8" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.809472 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a49d620c-c0c2-41cb-b458-cb67351334ad-combined-ca-bundle\") pod \"barbican-db-sync-477hc\" (UID: \"a49d620c-c0c2-41cb-b458-cb67351334ad\") " pod="openstack/barbican-db-sync-477hc" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.821830 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-76rb8" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.834873 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.854574 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpgx4\" (UniqueName: \"kubernetes.io/projected/a49d620c-c0c2-41cb-b458-cb67351334ad-kube-api-access-wpgx4\") pod \"barbican-db-sync-477hc\" (UID: \"a49d620c-c0c2-41cb-b458-cb67351334ad\") " pod="openstack/barbican-db-sync-477hc" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.860472 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.867799 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.868280 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.883192 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.884951 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6f610194-e55a-4fe1-821a-23921b314518-horizon-secret-key\") pod \"horizon-7ffb7b86bf-9ttrt\" (UID: \"6f610194-e55a-4fe1-821a-23921b314518\") " pod="openstack/horizon-7ffb7b86bf-9ttrt" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.884993 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-config\") pod \"dnsmasq-dns-785d8bcb8c-gvq9l\" (UID: \"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.885031 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6f610194-e55a-4fe1-821a-23921b314518-config-data\") pod \"horizon-7ffb7b86bf-9ttrt\" (UID: \"6f610194-e55a-4fe1-821a-23921b314518\") " pod="openstack/horizon-7ffb7b86bf-9ttrt" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.885067 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6f610194-e55a-4fe1-821a-23921b314518-scripts\") pod \"horizon-7ffb7b86bf-9ttrt\" (UID: \"6f610194-e55a-4fe1-821a-23921b314518\") " pod="openstack/horizon-7ffb7b86bf-9ttrt" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.885116 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlmfp\" (UniqueName: \"kubernetes.io/projected/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-kube-api-access-wlmfp\") pod \"dnsmasq-dns-785d8bcb8c-gvq9l\" (UID: \"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.885151 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-gvq9l\" (UID: \"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.885173 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvdmv\" (UniqueName: \"kubernetes.io/projected/6f610194-e55a-4fe1-821a-23921b314518-kube-api-access-vvdmv\") pod \"horizon-7ffb7b86bf-9ttrt\" (UID: \"6f610194-e55a-4fe1-821a-23921b314518\") " pod="openstack/horizon-7ffb7b86bf-9ttrt" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.885198 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-gvq9l\" (UID: \"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.885223 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/6f610194-e55a-4fe1-821a-23921b314518-logs\") pod \"horizon-7ffb7b86bf-9ttrt\" (UID: \"6f610194-e55a-4fe1-821a-23921b314518\") " pod="openstack/horizon-7ffb7b86bf-9ttrt" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.885241 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-gvq9l\" (UID: \"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.885269 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-gvq9l\" (UID: \"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.886191 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-gvq9l\" (UID: \"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.888663 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6f610194-e55a-4fe1-821a-23921b314518-config-data\") pod \"horizon-7ffb7b86bf-9ttrt\" (UID: \"6f610194-e55a-4fe1-821a-23921b314518\") " pod="openstack/horizon-7ffb7b86bf-9ttrt" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.889284 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-config\") pod \"dnsmasq-dns-785d8bcb8c-gvq9l\" (UID: \"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.889945 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-fcrz7" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.890640 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-gvq9l\" (UID: \"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.890683 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6f610194-e55a-4fe1-821a-23921b314518-scripts\") pod \"horizon-7ffb7b86bf-9ttrt\" (UID: \"6f610194-e55a-4fe1-821a-23921b314518\") " pod="openstack/horizon-7ffb7b86bf-9ttrt" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.890947 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f610194-e55a-4fe1-821a-23921b314518-logs\") pod \"horizon-7ffb7b86bf-9ttrt\" (UID: \"6f610194-e55a-4fe1-821a-23921b314518\") " pod="openstack/horizon-7ffb7b86bf-9ttrt" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.902660 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-gvq9l\" (UID: \"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.903950 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.909502 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-gvq9l\" (UID: \"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.909641 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6f610194-e55a-4fe1-821a-23921b314518-horizon-secret-key\") pod \"horizon-7ffb7b86bf-9ttrt\" (UID: \"6f610194-e55a-4fe1-821a-23921b314518\") " pod="openstack/horizon-7ffb7b86bf-9ttrt" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.929732 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlmfp\" (UniqueName: \"kubernetes.io/projected/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-kube-api-access-wlmfp\") pod \"dnsmasq-dns-785d8bcb8c-gvq9l\" (UID: \"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.932673 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-477hc" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.967199 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvdmv\" (UniqueName: \"kubernetes.io/projected/6f610194-e55a-4fe1-821a-23921b314518-kube-api-access-vvdmv\") pod \"horizon-7ffb7b86bf-9ttrt\" (UID: \"6f610194-e55a-4fe1-821a-23921b314518\") " pod="openstack/horizon-7ffb7b86bf-9ttrt" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.988515 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-bn2nz" Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.989799 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/023f6cd4-af35-4af3-8588-fdd4a49dc658-ovsdbserver-sb\") pod \"023f6cd4-af35-4af3-8588-fdd4a49dc658\" (UID: \"023f6cd4-af35-4af3-8588-fdd4a49dc658\") " Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.989848 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s2fn\" (UniqueName: \"kubernetes.io/projected/023f6cd4-af35-4af3-8588-fdd4a49dc658-kube-api-access-7s2fn\") pod \"023f6cd4-af35-4af3-8588-fdd4a49dc658\" (UID: \"023f6cd4-af35-4af3-8588-fdd4a49dc658\") " Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.989889 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/023f6cd4-af35-4af3-8588-fdd4a49dc658-config\") pod \"023f6cd4-af35-4af3-8588-fdd4a49dc658\" (UID: \"023f6cd4-af35-4af3-8588-fdd4a49dc658\") " Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.989954 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/023f6cd4-af35-4af3-8588-fdd4a49dc658-dns-swift-storage-0\") pod \"023f6cd4-af35-4af3-8588-fdd4a49dc658\" (UID: \"023f6cd4-af35-4af3-8588-fdd4a49dc658\") " Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.990042 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/023f6cd4-af35-4af3-8588-fdd4a49dc658-dns-svc\") pod \"023f6cd4-af35-4af3-8588-fdd4a49dc658\" (UID: \"023f6cd4-af35-4af3-8588-fdd4a49dc658\") " Feb 17 18:02:08 crc kubenswrapper[4680]: I0217 18:02:08.990157 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/023f6cd4-af35-4af3-8588-fdd4a49dc658-ovsdbserver-nb\") pod \"023f6cd4-af35-4af3-8588-fdd4a49dc658\" (UID: \"023f6cd4-af35-4af3-8588-fdd4a49dc658\") " Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.000310 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b0b1bd6-c80f-499f-9eb8-3038c2353027-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.000390 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.000593 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b0b1bd6-c80f-499f-9eb8-3038c2353027-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.005809 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5b0b1bd6-c80f-499f-9eb8-3038c2353027-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.005849 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b0b1bd6-c80f-499f-9eb8-3038c2353027-logs\") pod \"glance-default-internal-api-0\" (UID: \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.005894 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbnfh\" (UniqueName: \"kubernetes.io/projected/5b0b1bd6-c80f-499f-9eb8-3038c2353027-kube-api-access-lbnfh\") pod \"glance-default-internal-api-0\" (UID: \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.005964 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b0b1bd6-c80f-499f-9eb8-3038c2353027-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.012832 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-gvq9l"] Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.015309 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7ffb7b86bf-9ttrt" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.111533 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5b0b1bd6-c80f-499f-9eb8-3038c2353027-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.111827 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b0b1bd6-c80f-499f-9eb8-3038c2353027-logs\") pod \"glance-default-internal-api-0\" (UID: \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.111852 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbnfh\" (UniqueName: \"kubernetes.io/projected/5b0b1bd6-c80f-499f-9eb8-3038c2353027-kube-api-access-lbnfh\") pod \"glance-default-internal-api-0\" (UID: \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.111885 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b0b1bd6-c80f-499f-9eb8-3038c2353027-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.111922 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5b0b1bd6-c80f-499f-9eb8-3038c2353027-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.111938 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.112002 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b0b1bd6-c80f-499f-9eb8-3038c2353027-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.113365 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5b0b1bd6-c80f-499f-9eb8-3038c2353027-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.113657 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b0b1bd6-c80f-499f-9eb8-3038c2353027-logs\") pod \"glance-default-internal-api-0\" (UID: \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.113916 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.140027 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.195561 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbnfh\" (UniqueName: \"kubernetes.io/projected/5b0b1bd6-c80f-499f-9eb8-3038c2353027-kube-api-access-lbnfh\") pod \"glance-default-internal-api-0\" (UID: \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.266805 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b0b1bd6-c80f-499f-9eb8-3038c2353027-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.268670 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/023f6cd4-af35-4af3-8588-fdd4a49dc658-kube-api-access-7s2fn" (OuterVolumeSpecName: "kube-api-access-7s2fn") pod "023f6cd4-af35-4af3-8588-fdd4a49dc658" (UID: "023f6cd4-af35-4af3-8588-fdd4a49dc658"). InnerVolumeSpecName "kube-api-access-7s2fn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.328605 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b0b1bd6-c80f-499f-9eb8-3038c2353027-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.346493 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7s2fn\" (UniqueName: \"kubernetes.io/projected/023f6cd4-af35-4af3-8588-fdd4a49dc658-kube-api-access-7s2fn\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.380560 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b0b1bd6-c80f-499f-9eb8-3038c2353027-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.393894 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/023f6cd4-af35-4af3-8588-fdd4a49dc658-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "023f6cd4-af35-4af3-8588-fdd4a49dc658" (UID: "023f6cd4-af35-4af3-8588-fdd4a49dc658"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.417677 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/023f6cd4-af35-4af3-8588-fdd4a49dc658-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "023f6cd4-af35-4af3-8588-fdd4a49dc658" (UID: "023f6cd4-af35-4af3-8588-fdd4a49dc658"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.420133 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/023f6cd4-af35-4af3-8588-fdd4a49dc658-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "023f6cd4-af35-4af3-8588-fdd4a49dc658" (UID: "023f6cd4-af35-4af3-8588-fdd4a49dc658"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.443601 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-fcrz7" event={"ID":"023f6cd4-af35-4af3-8588-fdd4a49dc658","Type":"ContainerDied","Data":"182aecb194adfd7a1680b1264e870c9da5c6da33d10225d9ea380e21078ff183"} Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.444049 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-fcrz7" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.444427 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.444709 4680 scope.go:117] "RemoveContainer" containerID="b50bdb7fd8ee4e85f0378f01560f28ad2818cbf201c362bb1321b65e7c785de7" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.471203 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/023f6cd4-af35-4af3-8588-fdd4a49dc658-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.471255 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/023f6cd4-af35-4af3-8588-fdd4a49dc658-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.471269 4680 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/023f6cd4-af35-4af3-8588-fdd4a49dc658-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.491385 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-687dh" event={"ID":"45501435-01c6-448e-84ec-3fee833f0274","Type":"ContainerStarted","Data":"441f5d9cb435a9c62273c3127151021871fdd43204a69b9959f338cc2a9f84bb"} Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.491611 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-687dh" podUID="45501435-01c6-448e-84ec-3fee833f0274" containerName="dnsmasq-dns" containerID="cri-o://441f5d9cb435a9c62273c3127151021871fdd43204a69b9959f338cc2a9f84bb" gracePeriod=10 Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.491714 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-687dh" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.537983 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.547303 4680 scope.go:117] "RemoveContainer" containerID="1e2d7da058b2a96e355c455398552f86c3edbb2e67dca4d5e7030eb7354a8dd0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.566214 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/023f6cd4-af35-4af3-8588-fdd4a49dc658-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "023f6cd4-af35-4af3-8588-fdd4a49dc658" (UID: "023f6cd4-af35-4af3-8588-fdd4a49dc658"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.576809 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/023f6cd4-af35-4af3-8588-fdd4a49dc658-config" (OuterVolumeSpecName: "config") pod "023f6cd4-af35-4af3-8588-fdd4a49dc658" (UID: "023f6cd4-af35-4af3-8588-fdd4a49dc658"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.578110 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/023f6cd4-af35-4af3-8588-fdd4a49dc658-config\") pod \"023f6cd4-af35-4af3-8588-fdd4a49dc658\" (UID: \"023f6cd4-af35-4af3-8588-fdd4a49dc658\") " Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.580693 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 18:02:09 crc kubenswrapper[4680]: E0217 18:02:09.581071 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="023f6cd4-af35-4af3-8588-fdd4a49dc658" containerName="dnsmasq-dns" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.581082 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="023f6cd4-af35-4af3-8588-fdd4a49dc658" containerName="dnsmasq-dns" Feb 17 18:02:09 crc kubenswrapper[4680]: E0217 18:02:09.581105 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="023f6cd4-af35-4af3-8588-fdd4a49dc658" containerName="init" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.581111 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="023f6cd4-af35-4af3-8588-fdd4a49dc658" containerName="init" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.581294 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="023f6cd4-af35-4af3-8588-fdd4a49dc658" containerName="dnsmasq-dns" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.582245 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/023f6cd4-af35-4af3-8588-fdd4a49dc658-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:09 crc kubenswrapper[4680]: W0217 18:02:09.582385 4680 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/023f6cd4-af35-4af3-8588-fdd4a49dc658/volumes/kubernetes.io~configmap/config Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.582400 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/023f6cd4-af35-4af3-8588-fdd4a49dc658-config" (OuterVolumeSpecName: "config") pod "023f6cd4-af35-4af3-8588-fdd4a49dc658" (UID: "023f6cd4-af35-4af3-8588-fdd4a49dc658"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.582776 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.591817 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.592023 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.600774 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.625811 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-687dh" podStartSLOduration=4.625788116 podStartE2EDuration="4.625788116s" podCreationTimestamp="2026-02-17 18:02:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:02:09.54335907 +0000 UTC m=+1059.094257759" watchObservedRunningTime="2026-02-17 18:02:09.625788116 +0000 UTC m=+1059.176686785" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.683452 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f5f74b1-9667-421c-94e6-484208c36f55-scripts\") pod \"ceilometer-0\" (UID: \"7f5f74b1-9667-421c-94e6-484208c36f55\") " pod="openstack/ceilometer-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.683513 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f5f74b1-9667-421c-94e6-484208c36f55-run-httpd\") pod \"ceilometer-0\" (UID: \"7f5f74b1-9667-421c-94e6-484208c36f55\") " pod="openstack/ceilometer-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.683581 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f5f74b1-9667-421c-94e6-484208c36f55-log-httpd\") pod \"ceilometer-0\" (UID: \"7f5f74b1-9667-421c-94e6-484208c36f55\") " pod="openstack/ceilometer-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.683655 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f5f74b1-9667-421c-94e6-484208c36f55-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f5f74b1-9667-421c-94e6-484208c36f55\") " pod="openstack/ceilometer-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.683730 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f5f74b1-9667-421c-94e6-484208c36f55-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f5f74b1-9667-421c-94e6-484208c36f55\") " pod="openstack/ceilometer-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.683755 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f5f74b1-9667-421c-94e6-484208c36f55-config-data\") pod \"ceilometer-0\" (UID: \"7f5f74b1-9667-421c-94e6-484208c36f55\") " pod="openstack/ceilometer-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.683791 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw45w\" (UniqueName: 
\"kubernetes.io/projected/7f5f74b1-9667-421c-94e6-484208c36f55-kube-api-access-bw45w\") pod \"ceilometer-0\" (UID: \"7f5f74b1-9667-421c-94e6-484208c36f55\") " pod="openstack/ceilometer-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.683860 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/023f6cd4-af35-4af3-8588-fdd4a49dc658-config\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.786341 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f5f74b1-9667-421c-94e6-484208c36f55-run-httpd\") pod \"ceilometer-0\" (UID: \"7f5f74b1-9667-421c-94e6-484208c36f55\") " pod="openstack/ceilometer-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.787883 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f5f74b1-9667-421c-94e6-484208c36f55-log-httpd\") pod \"ceilometer-0\" (UID: \"7f5f74b1-9667-421c-94e6-484208c36f55\") " pod="openstack/ceilometer-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.788059 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f5f74b1-9667-421c-94e6-484208c36f55-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f5f74b1-9667-421c-94e6-484208c36f55\") " pod="openstack/ceilometer-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.788227 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f5f74b1-9667-421c-94e6-484208c36f55-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f5f74b1-9667-421c-94e6-484208c36f55\") " pod="openstack/ceilometer-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.788273 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f5f74b1-9667-421c-94e6-484208c36f55-config-data\") pod \"ceilometer-0\" (UID: \"7f5f74b1-9667-421c-94e6-484208c36f55\") " pod="openstack/ceilometer-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.788337 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw45w\" (UniqueName: \"kubernetes.io/projected/7f5f74b1-9667-421c-94e6-484208c36f55-kube-api-access-bw45w\") pod \"ceilometer-0\" (UID: \"7f5f74b1-9667-421c-94e6-484208c36f55\") " pod="openstack/ceilometer-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.788398 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f5f74b1-9667-421c-94e6-484208c36f55-scripts\") pod \"ceilometer-0\" (UID: \"7f5f74b1-9667-421c-94e6-484208c36f55\") " pod="openstack/ceilometer-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.788600 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f5f74b1-9667-421c-94e6-484208c36f55-log-httpd\") pod \"ceilometer-0\" (UID: \"7f5f74b1-9667-421c-94e6-484208c36f55\") " pod="openstack/ceilometer-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.787717 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f5f74b1-9667-421c-94e6-484208c36f55-run-httpd\") pod \"ceilometer-0\" (UID: 
\"7f5f74b1-9667-421c-94e6-484208c36f55\") " pod="openstack/ceilometer-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.798743 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f5f74b1-9667-421c-94e6-484208c36f55-scripts\") pod \"ceilometer-0\" (UID: \"7f5f74b1-9667-421c-94e6-484208c36f55\") " pod="openstack/ceilometer-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.803002 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f5f74b1-9667-421c-94e6-484208c36f55-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f5f74b1-9667-421c-94e6-484208c36f55\") " pod="openstack/ceilometer-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.816048 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw45w\" (UniqueName: \"kubernetes.io/projected/7f5f74b1-9667-421c-94e6-484208c36f55-kube-api-access-bw45w\") pod \"ceilometer-0\" (UID: \"7f5f74b1-9667-421c-94e6-484208c36f55\") " pod="openstack/ceilometer-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.820929 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f5f74b1-9667-421c-94e6-484208c36f55-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f5f74b1-9667-421c-94e6-484208c36f55\") " pod="openstack/ceilometer-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.825331 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f5f74b1-9667-421c-94e6-484208c36f55-config-data\") pod \"ceilometer-0\" (UID: \"7f5f74b1-9667-421c-94e6-484208c36f55\") " pod="openstack/ceilometer-0" Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.835970 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-fcrz7"] Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.844374 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-fcrz7"] Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.897897 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7bfbfb7db7-m428m"] Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.927908 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-4xhkb"] Feb 17 18:02:09 crc kubenswrapper[4680]: I0217 18:02:09.958081 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 18:02:10 crc kubenswrapper[4680]: I0217 18:02:10.038548 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dkvms"] Feb 17 18:02:10 crc kubenswrapper[4680]: I0217 18:02:10.319356 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-76rb8"] Feb 17 18:02:10 crc kubenswrapper[4680]: W0217 18:02:10.321483 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2c53952_8df2_4f30_ae29_5778822cae38.slice/crio-a82aa3f2f83a7e94ea65fbc513557beaa7a2052d473d46c663ab4645ee9b734f WatchSource:0}: Error finding container a82aa3f2f83a7e94ea65fbc513557beaa7a2052d473d46c663ab4645ee9b734f: Status 404 returned error can't find the container with id a82aa3f2f83a7e94ea65fbc513557beaa7a2052d473d46c663ab4645ee9b734f Feb 17 18:02:10 crc kubenswrapper[4680]: I0217 18:02:10.346200 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-9458w"] Feb 17 18:02:10 crc kubenswrapper[4680]: I0217 18:02:10.508603 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-4xhkb" event={"ID":"a17525b6-7d49-463e-9b5a-a8e0da0912ea","Type":"ContainerStarted","Data":"f977bd6e1d3831b6921ec85d86b581aade54d8fa4acf8fe229a30308f4658b24"} Feb 17 18:02:10 crc kubenswrapper[4680]: I0217 18:02:10.508886 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-4xhkb" event={"ID":"a17525b6-7d49-463e-9b5a-a8e0da0912ea","Type":"ContainerStarted","Data":"31bd6c1c7a655c369e1dcd37e0d075c3d5e9fb72a1a6bc1d6387d1939e1a4864"} Feb 17 18:02:10 crc kubenswrapper[4680]: I0217 18:02:10.511675 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-76rb8" event={"ID":"b2c53952-8df2-4f30-ae29-5778822cae38","Type":"ContainerStarted","Data":"a82aa3f2f83a7e94ea65fbc513557beaa7a2052d473d46c663ab4645ee9b734f"} Feb 17 18:02:10 crc kubenswrapper[4680]: I0217 18:02:10.520175 4680 generic.go:334] "Generic (PLEG): container finished" podID="45501435-01c6-448e-84ec-3fee833f0274" containerID="441f5d9cb435a9c62273c3127151021871fdd43204a69b9959f338cc2a9f84bb" exitCode=0 Feb 17 18:02:10 crc kubenswrapper[4680]: I0217 18:02:10.520250 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-687dh" event={"ID":"45501435-01c6-448e-84ec-3fee833f0274","Type":"ContainerDied","Data":"441f5d9cb435a9c62273c3127151021871fdd43204a69b9959f338cc2a9f84bb"} Feb 17 18:02:10 crc kubenswrapper[4680]: I0217 18:02:10.523661 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bfbfb7db7-m428m" event={"ID":"2d39666f-eb25-414e-a508-4b437b45254f","Type":"ContainerStarted","Data":"1c2d288c91c0616078d3992579052617a728a5d296b8962591e7fef047770db0"} Feb 17 18:02:10 crc kubenswrapper[4680]: I0217 18:02:10.526211 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9458w" event={"ID":"f869168f-8063-46a6-b574-d8be569ac733","Type":"ContainerStarted","Data":"8e7e5abb95fbbc5de30b539e57d3f09b27baf2e3d4225f72dbe3841faa7ed277"} Feb 17 18:02:10 crc kubenswrapper[4680]: I0217 18:02:10.546193 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dkvms" event={"ID":"da3851c2-401e-446e-8677-e9fb12b8fd9f","Type":"ContainerStarted","Data":"394cb7193dd074b518155626ecb88130290e11d93fd280dd23d4a7e3acbb5d92"} Feb 17 18:02:10 crc kubenswrapper[4680]: I0217 
18:02:10.546235 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dkvms" event={"ID":"da3851c2-401e-446e-8677-e9fb12b8fd9f","Type":"ContainerStarted","Data":"debde9a5c511aa88a405b10dfe104f9ebd7e5c3262484b4aa679db95ba5abe4e"} Feb 17 18:02:10 crc kubenswrapper[4680]: I0217 18:02:10.574060 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-dkvms" podStartSLOduration=3.574037202 podStartE2EDuration="3.574037202s" podCreationTimestamp="2026-02-17 18:02:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:02:10.570267254 +0000 UTC m=+1060.121165943" watchObservedRunningTime="2026-02-17 18:02:10.574037202 +0000 UTC m=+1060.124935881" Feb 17 18:02:10 crc kubenswrapper[4680]: I0217 18:02:10.855275 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-687dh" Feb 17 18:02:10 crc kubenswrapper[4680]: I0217 18:02:10.918745 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45501435-01c6-448e-84ec-3fee833f0274-config\") pod \"45501435-01c6-448e-84ec-3fee833f0274\" (UID: \"45501435-01c6-448e-84ec-3fee833f0274\") " Feb 17 18:02:10 crc kubenswrapper[4680]: I0217 18:02:10.918801 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m2mz\" (UniqueName: \"kubernetes.io/projected/45501435-01c6-448e-84ec-3fee833f0274-kube-api-access-6m2mz\") pod \"45501435-01c6-448e-84ec-3fee833f0274\" (UID: \"45501435-01c6-448e-84ec-3fee833f0274\") " Feb 17 18:02:10 crc kubenswrapper[4680]: I0217 18:02:10.919087 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45501435-01c6-448e-84ec-3fee833f0274-ovsdbserver-nb\") pod \"45501435-01c6-448e-84ec-3fee833f0274\" (UID: \"45501435-01c6-448e-84ec-3fee833f0274\") " Feb 17 18:02:10 crc kubenswrapper[4680]: I0217 18:02:10.919198 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45501435-01c6-448e-84ec-3fee833f0274-dns-swift-storage-0\") pod \"45501435-01c6-448e-84ec-3fee833f0274\" (UID: \"45501435-01c6-448e-84ec-3fee833f0274\") " Feb 17 18:02:10 crc kubenswrapper[4680]: I0217 18:02:10.919237 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45501435-01c6-448e-84ec-3fee833f0274-dns-svc\") pod \"45501435-01c6-448e-84ec-3fee833f0274\" (UID: \"45501435-01c6-448e-84ec-3fee833f0274\") " Feb 17 18:02:10 crc kubenswrapper[4680]: I0217 18:02:10.919358 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45501435-01c6-448e-84ec-3fee833f0274-ovsdbserver-sb\") pod \"45501435-01c6-448e-84ec-3fee833f0274\" (UID: \"45501435-01c6-448e-84ec-3fee833f0274\") " Feb 17 18:02:10 crc kubenswrapper[4680]: I0217 18:02:10.930571 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45501435-01c6-448e-84ec-3fee833f0274-kube-api-access-6m2mz" (OuterVolumeSpecName: "kube-api-access-6m2mz") pod "45501435-01c6-448e-84ec-3fee833f0274" (UID: "45501435-01c6-448e-84ec-3fee833f0274"). InnerVolumeSpecName "kube-api-access-6m2mz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.008979 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45501435-01c6-448e-84ec-3fee833f0274-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "45501435-01c6-448e-84ec-3fee833f0274" (UID: "45501435-01c6-448e-84ec-3fee833f0274"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.022086 4680 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45501435-01c6-448e-84ec-3fee833f0274-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.022126 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m2mz\" (UniqueName: \"kubernetes.io/projected/45501435-01c6-448e-84ec-3fee833f0274-kube-api-access-6m2mz\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.168951 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45501435-01c6-448e-84ec-3fee833f0274-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "45501435-01c6-448e-84ec-3fee833f0274" (UID: "45501435-01c6-448e-84ec-3fee833f0274"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.215518 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="023f6cd4-af35-4af3-8588-fdd4a49dc658" path="/var/lib/kubelet/pods/023f6cd4-af35-4af3-8588-fdd4a49dc658/volumes" Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.247419 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45501435-01c6-448e-84ec-3fee833f0274-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:11 crc kubenswrapper[4680]: W0217 18:02:11.304012 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb8cfca8_b7f0_4b3f_85f0_85b9c965cd9b.slice/crio-b0a0271385ff4dbaf12e77bf10f6fc26d5c5d0f5ae435c0b5380572a0e6e3c2f WatchSource:0}: Error finding container b0a0271385ff4dbaf12e77bf10f6fc26d5c5d0f5ae435c0b5380572a0e6e3c2f: Status 404 returned error can't find the container with id b0a0271385ff4dbaf12e77bf10f6fc26d5c5d0f5ae435c0b5380572a0e6e3c2f Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.304113 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45501435-01c6-448e-84ec-3fee833f0274-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "45501435-01c6-448e-84ec-3fee833f0274" (UID: "45501435-01c6-448e-84ec-3fee833f0274"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.304471 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-gvq9l"] Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.304506 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-477hc"] Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.332903 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45501435-01c6-448e-84ec-3fee833f0274-config" (OuterVolumeSpecName: "config") pod "45501435-01c6-448e-84ec-3fee833f0274" (UID: "45501435-01c6-448e-84ec-3fee833f0274"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.347919 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.359855 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45501435-01c6-448e-84ec-3fee833f0274-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.361547 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45501435-01c6-448e-84ec-3fee833f0274-config\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.362847 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-4xhkb" Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.364992 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45501435-01c6-448e-84ec-3fee833f0274-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "45501435-01c6-448e-84ec-3fee833f0274" (UID: "45501435-01c6-448e-84ec-3fee833f0274"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.463515 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a17525b6-7d49-463e-9b5a-a8e0da0912ea-config\") pod \"a17525b6-7d49-463e-9b5a-a8e0da0912ea\" (UID: \"a17525b6-7d49-463e-9b5a-a8e0da0912ea\") " Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.463912 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a17525b6-7d49-463e-9b5a-a8e0da0912ea-dns-swift-storage-0\") pod \"a17525b6-7d49-463e-9b5a-a8e0da0912ea\" (UID: \"a17525b6-7d49-463e-9b5a-a8e0da0912ea\") " Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.463968 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a17525b6-7d49-463e-9b5a-a8e0da0912ea-ovsdbserver-sb\") pod \"a17525b6-7d49-463e-9b5a-a8e0da0912ea\" (UID: \"a17525b6-7d49-463e-9b5a-a8e0da0912ea\") " Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.464157 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a17525b6-7d49-463e-9b5a-a8e0da0912ea-ovsdbserver-nb\") pod \"a17525b6-7d49-463e-9b5a-a8e0da0912ea\" (UID: \"a17525b6-7d49-463e-9b5a-a8e0da0912ea\") " Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.464216 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a17525b6-7d49-463e-9b5a-a8e0da0912ea-dns-svc\") pod \"a17525b6-7d49-463e-9b5a-a8e0da0912ea\" (UID: \"a17525b6-7d49-463e-9b5a-a8e0da0912ea\") " Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.464264 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh6wg\" (UniqueName: \"kubernetes.io/projected/a17525b6-7d49-463e-9b5a-a8e0da0912ea-kube-api-access-wh6wg\") pod \"a17525b6-7d49-463e-9b5a-a8e0da0912ea\" (UID: \"a17525b6-7d49-463e-9b5a-a8e0da0912ea\") " Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.466921 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45501435-01c6-448e-84ec-3fee833f0274-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.492150 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-bn2nz"] Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.505698 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7ffb7b86bf-9ttrt"] Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.564624 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bn2nz" event={"ID":"dce4bc8a-e4f7-4716-abc6-cb70c238f4cc","Type":"ContainerStarted","Data":"03b4fb41b2bec6e1fe83bff14ae0278f36ee6aabf7a42ec9ab1372f6acfbea74"} Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.581967 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.590525 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a17525b6-7d49-463e-9b5a-a8e0da0912ea-kube-api-access-wh6wg" (OuterVolumeSpecName: "kube-api-access-wh6wg") pod "a17525b6-7d49-463e-9b5a-a8e0da0912ea" (UID: 
"a17525b6-7d49-463e-9b5a-a8e0da0912ea"). InnerVolumeSpecName "kube-api-access-wh6wg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.598857 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a17525b6-7d49-463e-9b5a-a8e0da0912ea-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a17525b6-7d49-463e-9b5a-a8e0da0912ea" (UID: "a17525b6-7d49-463e-9b5a-a8e0da0912ea"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.599326 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-477hc" event={"ID":"a49d620c-c0c2-41cb-b458-cb67351334ad","Type":"ContainerStarted","Data":"85672a985476f03f15f3499fc791e704fa24e518b6c3032214057291ce8f1108"} Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.599708 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a17525b6-7d49-463e-9b5a-a8e0da0912ea-config" (OuterVolumeSpecName: "config") pod "a17525b6-7d49-463e-9b5a-a8e0da0912ea" (UID: "a17525b6-7d49-463e-9b5a-a8e0da0912ea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.600671 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a17525b6-7d49-463e-9b5a-a8e0da0912ea-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a17525b6-7d49-463e-9b5a-a8e0da0912ea" (UID: "a17525b6-7d49-463e-9b5a-a8e0da0912ea"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.606057 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a17525b6-7d49-463e-9b5a-a8e0da0912ea-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a17525b6-7d49-463e-9b5a-a8e0da0912ea" (UID: "a17525b6-7d49-463e-9b5a-a8e0da0912ea"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.620803 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a17525b6-7d49-463e-9b5a-a8e0da0912ea-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a17525b6-7d49-463e-9b5a-a8e0da0912ea" (UID: "a17525b6-7d49-463e-9b5a-a8e0da0912ea"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.664787 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.665550 4680 generic.go:334] "Generic (PLEG): container finished" podID="a17525b6-7d49-463e-9b5a-a8e0da0912ea" containerID="f977bd6e1d3831b6921ec85d86b581aade54d8fa4acf8fe229a30308f4658b24" exitCode=0 Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.665723 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-4xhkb" event={"ID":"a17525b6-7d49-463e-9b5a-a8e0da0912ea","Type":"ContainerDied","Data":"f977bd6e1d3831b6921ec85d86b581aade54d8fa4acf8fe229a30308f4658b24"} Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.665746 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-4xhkb" event={"ID":"a17525b6-7d49-463e-9b5a-a8e0da0912ea","Type":"ContainerDied","Data":"31bd6c1c7a655c369e1dcd37e0d075c3d5e9fb72a1a6bc1d6387d1939e1a4864"} Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.665765 4680 scope.go:117] "RemoveContainer" containerID="f977bd6e1d3831b6921ec85d86b581aade54d8fa4acf8fe229a30308f4658b24" Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.665895 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-4xhkb" Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.670826 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a17525b6-7d49-463e-9b5a-a8e0da0912ea-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.670867 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a17525b6-7d49-463e-9b5a-a8e0da0912ea-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.670882 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh6wg\" (UniqueName: \"kubernetes.io/projected/a17525b6-7d49-463e-9b5a-a8e0da0912ea-kube-api-access-wh6wg\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.670895 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a17525b6-7d49-463e-9b5a-a8e0da0912ea-config\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.670907 4680 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a17525b6-7d49-463e-9b5a-a8e0da0912ea-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.670917 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a17525b6-7d49-463e-9b5a-a8e0da0912ea-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.686332 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7ffb7b86bf-9ttrt" event={"ID":"6f610194-e55a-4fe1-821a-23921b314518","Type":"ContainerStarted","Data":"9434e583d5064db22e372b975e6bb82f17446614717caf4d60fa7a9b4647eca6"} Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.709460 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"dfcfa220-e137-42fd-a4ab-8b1682720cb1","Type":"ContainerStarted","Data":"4e7d6c11b4d4db37503b9dcdaeb9cf4be984a1abbc94abf799796bce7b9643d7"} Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.731158 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-687dh" event={"ID":"45501435-01c6-448e-84ec-3fee833f0274","Type":"ContainerDied","Data":"73a3c8cf3bf76ea5c3b26b6f8eb1913aea715d02b26e1b4078f351cd1121011b"} Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.731268 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-687dh" Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.763295 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" event={"ID":"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b","Type":"ContainerStarted","Data":"b0a0271385ff4dbaf12e77bf10f6fc26d5c5d0f5ae435c0b5380572a0e6e3c2f"} Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.767575 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-4xhkb"] Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.781725 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-4xhkb"] Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.806998 4680 scope.go:117] "RemoveContainer" containerID="f977bd6e1d3831b6921ec85d86b581aade54d8fa4acf8fe229a30308f4658b24" Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.811944 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-687dh"] Feb 17 18:02:11 crc kubenswrapper[4680]: E0217 18:02:11.819291 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f977bd6e1d3831b6921ec85d86b581aade54d8fa4acf8fe229a30308f4658b24\": container with ID starting with f977bd6e1d3831b6921ec85d86b581aade54d8fa4acf8fe229a30308f4658b24 not found: ID does not exist" containerID="f977bd6e1d3831b6921ec85d86b581aade54d8fa4acf8fe229a30308f4658b24" Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.819355 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f977bd6e1d3831b6921ec85d86b581aade54d8fa4acf8fe229a30308f4658b24"} err="failed to get container status \"f977bd6e1d3831b6921ec85d86b581aade54d8fa4acf8fe229a30308f4658b24\": rpc error: code = NotFound desc = could not find container \"f977bd6e1d3831b6921ec85d86b581aade54d8fa4acf8fe229a30308f4658b24\": container with ID starting with f977bd6e1d3831b6921ec85d86b581aade54d8fa4acf8fe229a30308f4658b24 not found: ID does not exist" Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.819388 4680 scope.go:117] "RemoveContainer" containerID="441f5d9cb435a9c62273c3127151021871fdd43204a69b9959f338cc2a9f84bb" Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.826042 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-687dh"] Feb 17 18:02:11 crc kubenswrapper[4680]: I0217 18:02:11.940597 4680 scope.go:117] "RemoveContainer" containerID="9613b672e030a0c261ed676a0916da714ad4f13a44d258e27365a3dffa9d3145" Feb 17 18:02:12 crc kubenswrapper[4680]: I0217 18:02:12.773163 4680 generic.go:334] "Generic (PLEG): container finished" podID="cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b" containerID="0b53c527230ed8c8bf0428a4c6601b78aa8af438049cb5807e01ef617b369d2f" exitCode=0 Feb 17 18:02:12 crc kubenswrapper[4680]: I0217 18:02:12.773464 4680 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" event={"ID":"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b","Type":"ContainerDied","Data":"0b53c527230ed8c8bf0428a4c6601b78aa8af438049cb5807e01ef617b369d2f"} Feb 17 18:02:12 crc kubenswrapper[4680]: I0217 18:02:12.779148 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bn2nz" event={"ID":"dce4bc8a-e4f7-4716-abc6-cb70c238f4cc","Type":"ContainerStarted","Data":"6f25aa11e0c3556a1b98a0ccdf89d777fde4b0e9166e86c6f664f7ceb38d94ae"} Feb 17 18:02:12 crc kubenswrapper[4680]: I0217 18:02:12.858193 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5b0b1bd6-c80f-499f-9eb8-3038c2353027","Type":"ContainerStarted","Data":"c60681a1b3bb9e18d96014e6fa5de8d1c0f3b556b9c6b9138f8db774a124bc47"} Feb 17 18:02:12 crc kubenswrapper[4680]: I0217 18:02:12.873103 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f5f74b1-9667-421c-94e6-484208c36f55","Type":"ContainerStarted","Data":"cb6b02e647128bc7add226d18b92b64306f1cff3860e100908fd44ed5671ed59"} Feb 17 18:02:12 crc kubenswrapper[4680]: I0217 18:02:12.874817 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-bn2nz" podStartSLOduration=4.874772647 podStartE2EDuration="4.874772647s" podCreationTimestamp="2026-02-17 18:02:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:02:12.833947896 +0000 UTC m=+1062.384846575" watchObservedRunningTime="2026-02-17 18:02:12.874772647 +0000 UTC m=+1062.425671326" Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.186954 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45501435-01c6-448e-84ec-3fee833f0274" path="/var/lib/kubelet/pods/45501435-01c6-448e-84ec-3fee833f0274/volumes" Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.188110 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a17525b6-7d49-463e-9b5a-a8e0da0912ea" path="/var/lib/kubelet/pods/a17525b6-7d49-463e-9b5a-a8e0da0912ea/volumes" Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.195604 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.211400 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7bfbfb7db7-m428m"] Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.293188 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-c489b55b9-9nhw5"] Feb 17 18:02:13 crc kubenswrapper[4680]: E0217 18:02:13.293621 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45501435-01c6-448e-84ec-3fee833f0274" containerName="init" Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.293641 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="45501435-01c6-448e-84ec-3fee833f0274" containerName="init" Feb 17 18:02:13 crc kubenswrapper[4680]: E0217 18:02:13.293653 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a17525b6-7d49-463e-9b5a-a8e0da0912ea" containerName="init" Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.293659 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="a17525b6-7d49-463e-9b5a-a8e0da0912ea" containerName="init" Feb 17 18:02:13 crc kubenswrapper[4680]: E0217 18:02:13.293681 4680 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="45501435-01c6-448e-84ec-3fee833f0274" containerName="dnsmasq-dns" Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.293687 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="45501435-01c6-448e-84ec-3fee833f0274" containerName="dnsmasq-dns" Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.294593 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="45501435-01c6-448e-84ec-3fee833f0274" containerName="dnsmasq-dns" Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.294633 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="a17525b6-7d49-463e-9b5a-a8e0da0912ea" containerName="init" Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.341052 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c489b55b9-9nhw5" Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.385719 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-c489b55b9-9nhw5"] Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.479971 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6e2a20ce-72be-4602-99f0-c1fb77919798-horizon-secret-key\") pod \"horizon-c489b55b9-9nhw5\" (UID: \"6e2a20ce-72be-4602-99f0-c1fb77919798\") " pod="openstack/horizon-c489b55b9-9nhw5" Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.480090 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e2a20ce-72be-4602-99f0-c1fb77919798-logs\") pod \"horizon-c489b55b9-9nhw5\" (UID: \"6e2a20ce-72be-4602-99f0-c1fb77919798\") " pod="openstack/horizon-c489b55b9-9nhw5" Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.480163 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e2a20ce-72be-4602-99f0-c1fb77919798-config-data\") pod \"horizon-c489b55b9-9nhw5\" (UID: \"6e2a20ce-72be-4602-99f0-c1fb77919798\") " pod="openstack/horizon-c489b55b9-9nhw5" Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.480196 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e2a20ce-72be-4602-99f0-c1fb77919798-scripts\") pod \"horizon-c489b55b9-9nhw5\" (UID: \"6e2a20ce-72be-4602-99f0-c1fb77919798\") " pod="openstack/horizon-c489b55b9-9nhw5" Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.480248 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc4jq\" (UniqueName: \"kubernetes.io/projected/6e2a20ce-72be-4602-99f0-c1fb77919798-kube-api-access-zc4jq\") pod \"horizon-c489b55b9-9nhw5\" (UID: \"6e2a20ce-72be-4602-99f0-c1fb77919798\") " pod="openstack/horizon-c489b55b9-9nhw5" Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.491551 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.541432 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.581525 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc4jq\" (UniqueName: 
\"kubernetes.io/projected/6e2a20ce-72be-4602-99f0-c1fb77919798-kube-api-access-zc4jq\") pod \"horizon-c489b55b9-9nhw5\" (UID: \"6e2a20ce-72be-4602-99f0-c1fb77919798\") " pod="openstack/horizon-c489b55b9-9nhw5" Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.581658 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6e2a20ce-72be-4602-99f0-c1fb77919798-horizon-secret-key\") pod \"horizon-c489b55b9-9nhw5\" (UID: \"6e2a20ce-72be-4602-99f0-c1fb77919798\") " pod="openstack/horizon-c489b55b9-9nhw5" Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.581692 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e2a20ce-72be-4602-99f0-c1fb77919798-logs\") pod \"horizon-c489b55b9-9nhw5\" (UID: \"6e2a20ce-72be-4602-99f0-c1fb77919798\") " pod="openstack/horizon-c489b55b9-9nhw5" Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.581736 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e2a20ce-72be-4602-99f0-c1fb77919798-config-data\") pod \"horizon-c489b55b9-9nhw5\" (UID: \"6e2a20ce-72be-4602-99f0-c1fb77919798\") " pod="openstack/horizon-c489b55b9-9nhw5" Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.581755 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e2a20ce-72be-4602-99f0-c1fb77919798-scripts\") pod \"horizon-c489b55b9-9nhw5\" (UID: \"6e2a20ce-72be-4602-99f0-c1fb77919798\") " pod="openstack/horizon-c489b55b9-9nhw5" Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.584352 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e2a20ce-72be-4602-99f0-c1fb77919798-scripts\") pod \"horizon-c489b55b9-9nhw5\" (UID: \"6e2a20ce-72be-4602-99f0-c1fb77919798\") " pod="openstack/horizon-c489b55b9-9nhw5" Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.584721 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e2a20ce-72be-4602-99f0-c1fb77919798-logs\") pod \"horizon-c489b55b9-9nhw5\" (UID: \"6e2a20ce-72be-4602-99f0-c1fb77919798\") " pod="openstack/horizon-c489b55b9-9nhw5" Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.586158 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e2a20ce-72be-4602-99f0-c1fb77919798-config-data\") pod \"horizon-c489b55b9-9nhw5\" (UID: \"6e2a20ce-72be-4602-99f0-c1fb77919798\") " pod="openstack/horizon-c489b55b9-9nhw5" Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.613255 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6e2a20ce-72be-4602-99f0-c1fb77919798-horizon-secret-key\") pod \"horizon-c489b55b9-9nhw5\" (UID: \"6e2a20ce-72be-4602-99f0-c1fb77919798\") " pod="openstack/horizon-c489b55b9-9nhw5" Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.616520 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc4jq\" (UniqueName: \"kubernetes.io/projected/6e2a20ce-72be-4602-99f0-c1fb77919798-kube-api-access-zc4jq\") pod \"horizon-c489b55b9-9nhw5\" (UID: \"6e2a20ce-72be-4602-99f0-c1fb77919798\") " pod="openstack/horizon-c489b55b9-9nhw5" Feb 17 18:02:13 crc 
kubenswrapper[4680]: I0217 18:02:13.725025 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c489b55b9-9nhw5" Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.920574 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dfcfa220-e137-42fd-a4ab-8b1682720cb1","Type":"ContainerStarted","Data":"9c3fc1d04aa7953dd233d5ad48fc26b0554ac7d3bc060b82be211bf7f82db047"} Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.935713 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" event={"ID":"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b","Type":"ContainerStarted","Data":"9f4f97d0967d5e214dbb2c0f6bd53d008f02270da4a5a54447f2f6e7d0c9eec6"} Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.935825 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" Feb 17 18:02:13 crc kubenswrapper[4680]: I0217 18:02:13.983183 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" podStartSLOduration=5.983158477 podStartE2EDuration="5.983158477s" podCreationTimestamp="2026-02-17 18:02:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:02:13.965611496 +0000 UTC m=+1063.516510175" watchObservedRunningTime="2026-02-17 18:02:13.983158477 +0000 UTC m=+1063.534057156" Feb 17 18:02:14 crc kubenswrapper[4680]: I0217 18:02:14.582395 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-c489b55b9-9nhw5"] Feb 17 18:02:14 crc kubenswrapper[4680]: W0217 18:02:14.613357 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e2a20ce_72be_4602_99f0_c1fb77919798.slice/crio-f1761eeb7470036380b55cd7796664791a82b02486a0cb9530d05c0a61690611 WatchSource:0}: Error finding container f1761eeb7470036380b55cd7796664791a82b02486a0cb9530d05c0a61690611: Status 404 returned error can't find the container with id f1761eeb7470036380b55cd7796664791a82b02486a0cb9530d05c0a61690611 Feb 17 18:02:14 crc kubenswrapper[4680]: I0217 18:02:14.960618 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5b0b1bd6-c80f-499f-9eb8-3038c2353027","Type":"ContainerStarted","Data":"921c2f7ebd5307800e63f8d5eb1f2a41f78ca5553aa7546f1f49cf28336b8cc0"} Feb 17 18:02:14 crc kubenswrapper[4680]: I0217 18:02:14.971551 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c489b55b9-9nhw5" event={"ID":"6e2a20ce-72be-4602-99f0-c1fb77919798","Type":"ContainerStarted","Data":"f1761eeb7470036380b55cd7796664791a82b02486a0cb9530d05c0a61690611"} Feb 17 18:02:15 crc kubenswrapper[4680]: I0217 18:02:15.986731 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dfcfa220-e137-42fd-a4ab-8b1682720cb1","Type":"ContainerStarted","Data":"415bcc4c75dbbd456dd84a53d3b76ef2190b8898f992be300ee6e9de0f72cb60"} Feb 17 18:02:17 crc kubenswrapper[4680]: I0217 18:02:17.005630 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5b0b1bd6-c80f-499f-9eb8-3038c2353027","Type":"ContainerStarted","Data":"08f272d1abf2ffb44570f976fcfc98fd0d5fa5829e5fe2d47c40c9b2bf9b16d4"} Feb 17 18:02:17 crc kubenswrapper[4680]: I0217 
18:02:17.005674 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="dfcfa220-e137-42fd-a4ab-8b1682720cb1" containerName="glance-log" containerID="cri-o://9c3fc1d04aa7953dd233d5ad48fc26b0554ac7d3bc060b82be211bf7f82db047" gracePeriod=30 Feb 17 18:02:17 crc kubenswrapper[4680]: I0217 18:02:17.005935 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="dfcfa220-e137-42fd-a4ab-8b1682720cb1" containerName="glance-httpd" containerID="cri-o://415bcc4c75dbbd456dd84a53d3b76ef2190b8898f992be300ee6e9de0f72cb60" gracePeriod=30 Feb 17 18:02:17 crc kubenswrapper[4680]: I0217 18:02:17.005776 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5b0b1bd6-c80f-499f-9eb8-3038c2353027" containerName="glance-log" containerID="cri-o://921c2f7ebd5307800e63f8d5eb1f2a41f78ca5553aa7546f1f49cf28336b8cc0" gracePeriod=30 Feb 17 18:02:17 crc kubenswrapper[4680]: I0217 18:02:17.005873 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5b0b1bd6-c80f-499f-9eb8-3038c2353027" containerName="glance-httpd" containerID="cri-o://08f272d1abf2ffb44570f976fcfc98fd0d5fa5829e5fe2d47c40c9b2bf9b16d4" gracePeriod=30 Feb 17 18:02:17 crc kubenswrapper[4680]: I0217 18:02:17.070283 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=9.070261557 podStartE2EDuration="9.070261557s" podCreationTimestamp="2026-02-17 18:02:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:02:17.038111869 +0000 UTC m=+1066.589010538" watchObservedRunningTime="2026-02-17 18:02:17.070261557 +0000 UTC m=+1066.621160236" Feb 17 18:02:17 crc kubenswrapper[4680]: I0217 18:02:17.075674 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.075652877 podStartE2EDuration="9.075652877s" podCreationTimestamp="2026-02-17 18:02:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:02:17.065040634 +0000 UTC m=+1066.615939313" watchObservedRunningTime="2026-02-17 18:02:17.075652877 +0000 UTC m=+1066.626551556" Feb 17 18:02:18 crc kubenswrapper[4680]: I0217 18:02:18.016441 4680 generic.go:334] "Generic (PLEG): container finished" podID="dfcfa220-e137-42fd-a4ab-8b1682720cb1" containerID="415bcc4c75dbbd456dd84a53d3b76ef2190b8898f992be300ee6e9de0f72cb60" exitCode=0 Feb 17 18:02:18 crc kubenswrapper[4680]: I0217 18:02:18.016730 4680 generic.go:334] "Generic (PLEG): container finished" podID="dfcfa220-e137-42fd-a4ab-8b1682720cb1" containerID="9c3fc1d04aa7953dd233d5ad48fc26b0554ac7d3bc060b82be211bf7f82db047" exitCode=143 Feb 17 18:02:18 crc kubenswrapper[4680]: I0217 18:02:18.016482 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dfcfa220-e137-42fd-a4ab-8b1682720cb1","Type":"ContainerDied","Data":"415bcc4c75dbbd456dd84a53d3b76ef2190b8898f992be300ee6e9de0f72cb60"} Feb 17 18:02:18 crc kubenswrapper[4680]: I0217 18:02:18.016807 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"dfcfa220-e137-42fd-a4ab-8b1682720cb1","Type":"ContainerDied","Data":"9c3fc1d04aa7953dd233d5ad48fc26b0554ac7d3bc060b82be211bf7f82db047"} Feb 17 18:02:18 crc kubenswrapper[4680]: I0217 18:02:18.019429 4680 generic.go:334] "Generic (PLEG): container finished" podID="5b0b1bd6-c80f-499f-9eb8-3038c2353027" containerID="08f272d1abf2ffb44570f976fcfc98fd0d5fa5829e5fe2d47c40c9b2bf9b16d4" exitCode=0 Feb 17 18:02:18 crc kubenswrapper[4680]: I0217 18:02:18.019457 4680 generic.go:334] "Generic (PLEG): container finished" podID="5b0b1bd6-c80f-499f-9eb8-3038c2353027" containerID="921c2f7ebd5307800e63f8d5eb1f2a41f78ca5553aa7546f1f49cf28336b8cc0" exitCode=143 Feb 17 18:02:18 crc kubenswrapper[4680]: I0217 18:02:18.019477 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5b0b1bd6-c80f-499f-9eb8-3038c2353027","Type":"ContainerDied","Data":"08f272d1abf2ffb44570f976fcfc98fd0d5fa5829e5fe2d47c40c9b2bf9b16d4"} Feb 17 18:02:18 crc kubenswrapper[4680]: I0217 18:02:18.019495 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5b0b1bd6-c80f-499f-9eb8-3038c2353027","Type":"ContainerDied","Data":"921c2f7ebd5307800e63f8d5eb1f2a41f78ca5553aa7546f1f49cf28336b8cc0"} Feb 17 18:02:18 crc kubenswrapper[4680]: I0217 18:02:18.830395 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7ffb7b86bf-9ttrt"] Feb 17 18:02:18 crc kubenswrapper[4680]: I0217 18:02:18.862573 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-8ccfcccc8-2qwb2"] Feb 17 18:02:18 crc kubenswrapper[4680]: I0217 18:02:18.863941 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:02:18 crc kubenswrapper[4680]: I0217 18:02:18.876223 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 17 18:02:18 crc kubenswrapper[4680]: I0217 18:02:18.884542 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8ccfcccc8-2qwb2"] Feb 17 18:02:18 crc kubenswrapper[4680]: I0217 18:02:18.951439 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-c489b55b9-9nhw5"] Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.003259 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzj4q\" (UniqueName: \"kubernetes.io/projected/d7b7455b-130d-4711-9174-16e352b16455-kube-api-access-jzj4q\") pod \"horizon-8ccfcccc8-2qwb2\" (UID: \"d7b7455b-130d-4711-9174-16e352b16455\") " pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.003336 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7b7455b-130d-4711-9174-16e352b16455-horizon-tls-certs\") pod \"horizon-8ccfcccc8-2qwb2\" (UID: \"d7b7455b-130d-4711-9174-16e352b16455\") " pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.003395 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d7b7455b-130d-4711-9174-16e352b16455-horizon-secret-key\") pod \"horizon-8ccfcccc8-2qwb2\" (UID: \"d7b7455b-130d-4711-9174-16e352b16455\") " pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.003422 4680 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7b7455b-130d-4711-9174-16e352b16455-config-data\") pod \"horizon-8ccfcccc8-2qwb2\" (UID: \"d7b7455b-130d-4711-9174-16e352b16455\") " pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.003480 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7b7455b-130d-4711-9174-16e352b16455-scripts\") pod \"horizon-8ccfcccc8-2qwb2\" (UID: \"d7b7455b-130d-4711-9174-16e352b16455\") " pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.003589 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b7455b-130d-4711-9174-16e352b16455-combined-ca-bundle\") pod \"horizon-8ccfcccc8-2qwb2\" (UID: \"d7b7455b-130d-4711-9174-16e352b16455\") " pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.003612 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7b7455b-130d-4711-9174-16e352b16455-logs\") pod \"horizon-8ccfcccc8-2qwb2\" (UID: \"d7b7455b-130d-4711-9174-16e352b16455\") " pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.057920 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6978d4c7cb-4k7vb"] Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.059845 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6978d4c7cb-4k7vb" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.099274 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6978d4c7cb-4k7vb"] Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.105427 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzj4q\" (UniqueName: \"kubernetes.io/projected/d7b7455b-130d-4711-9174-16e352b16455-kube-api-access-jzj4q\") pod \"horizon-8ccfcccc8-2qwb2\" (UID: \"d7b7455b-130d-4711-9174-16e352b16455\") " pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.105493 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7b7455b-130d-4711-9174-16e352b16455-horizon-tls-certs\") pod \"horizon-8ccfcccc8-2qwb2\" (UID: \"d7b7455b-130d-4711-9174-16e352b16455\") " pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.105553 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d7b7455b-130d-4711-9174-16e352b16455-horizon-secret-key\") pod \"horizon-8ccfcccc8-2qwb2\" (UID: \"d7b7455b-130d-4711-9174-16e352b16455\") " pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.105581 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7b7455b-130d-4711-9174-16e352b16455-config-data\") pod \"horizon-8ccfcccc8-2qwb2\" (UID: \"d7b7455b-130d-4711-9174-16e352b16455\") " pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:02:19 crc 
kubenswrapper[4680]: I0217 18:02:19.105637 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7b7455b-130d-4711-9174-16e352b16455-scripts\") pod \"horizon-8ccfcccc8-2qwb2\" (UID: \"d7b7455b-130d-4711-9174-16e352b16455\") " pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.105754 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b7455b-130d-4711-9174-16e352b16455-combined-ca-bundle\") pod \"horizon-8ccfcccc8-2qwb2\" (UID: \"d7b7455b-130d-4711-9174-16e352b16455\") " pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.105779 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7b7455b-130d-4711-9174-16e352b16455-logs\") pod \"horizon-8ccfcccc8-2qwb2\" (UID: \"d7b7455b-130d-4711-9174-16e352b16455\") " pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.106259 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7b7455b-130d-4711-9174-16e352b16455-logs\") pod \"horizon-8ccfcccc8-2qwb2\" (UID: \"d7b7455b-130d-4711-9174-16e352b16455\") " pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.107274 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7b7455b-130d-4711-9174-16e352b16455-scripts\") pod \"horizon-8ccfcccc8-2qwb2\" (UID: \"d7b7455b-130d-4711-9174-16e352b16455\") " pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.107618 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7b7455b-130d-4711-9174-16e352b16455-config-data\") pod \"horizon-8ccfcccc8-2qwb2\" (UID: \"d7b7455b-130d-4711-9174-16e352b16455\") " pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.113748 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7b7455b-130d-4711-9174-16e352b16455-horizon-tls-certs\") pod \"horizon-8ccfcccc8-2qwb2\" (UID: \"d7b7455b-130d-4711-9174-16e352b16455\") " pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.116695 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d7b7455b-130d-4711-9174-16e352b16455-horizon-secret-key\") pod \"horizon-8ccfcccc8-2qwb2\" (UID: \"d7b7455b-130d-4711-9174-16e352b16455\") " pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.123090 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b7455b-130d-4711-9174-16e352b16455-combined-ca-bundle\") pod \"horizon-8ccfcccc8-2qwb2\" (UID: \"d7b7455b-130d-4711-9174-16e352b16455\") " pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.167606 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.183710 4680 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzj4q\" (UniqueName: \"kubernetes.io/projected/d7b7455b-130d-4711-9174-16e352b16455-kube-api-access-jzj4q\") pod \"horizon-8ccfcccc8-2qwb2\" (UID: \"d7b7455b-130d-4711-9174-16e352b16455\") " pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.199741 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.214944 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/91980b55-1251-4404-a22d-1572dd91ec8a-horizon-tls-certs\") pod \"horizon-6978d4c7cb-4k7vb\" (UID: \"91980b55-1251-4404-a22d-1572dd91ec8a\") " pod="openstack/horizon-6978d4c7cb-4k7vb" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.215280 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sc87\" (UniqueName: \"kubernetes.io/projected/91980b55-1251-4404-a22d-1572dd91ec8a-kube-api-access-9sc87\") pod \"horizon-6978d4c7cb-4k7vb\" (UID: \"91980b55-1251-4404-a22d-1572dd91ec8a\") " pod="openstack/horizon-6978d4c7cb-4k7vb" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.215446 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/91980b55-1251-4404-a22d-1572dd91ec8a-config-data\") pod \"horizon-6978d4c7cb-4k7vb\" (UID: \"91980b55-1251-4404-a22d-1572dd91ec8a\") " pod="openstack/horizon-6978d4c7cb-4k7vb" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.215633 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91980b55-1251-4404-a22d-1572dd91ec8a-logs\") pod \"horizon-6978d4c7cb-4k7vb\" (UID: \"91980b55-1251-4404-a22d-1572dd91ec8a\") " pod="openstack/horizon-6978d4c7cb-4k7vb" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.215761 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91980b55-1251-4404-a22d-1572dd91ec8a-combined-ca-bundle\") pod \"horizon-6978d4c7cb-4k7vb\" (UID: \"91980b55-1251-4404-a22d-1572dd91ec8a\") " pod="openstack/horizon-6978d4c7cb-4k7vb" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.215853 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/91980b55-1251-4404-a22d-1572dd91ec8a-horizon-secret-key\") pod \"horizon-6978d4c7cb-4k7vb\" (UID: \"91980b55-1251-4404-a22d-1572dd91ec8a\") " pod="openstack/horizon-6978d4c7cb-4k7vb" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.215996 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91980b55-1251-4404-a22d-1572dd91ec8a-scripts\") pod \"horizon-6978d4c7cb-4k7vb\" (UID: \"91980b55-1251-4404-a22d-1572dd91ec8a\") " pod="openstack/horizon-6978d4c7cb-4k7vb" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.320366 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/91980b55-1251-4404-a22d-1572dd91ec8a-horizon-tls-certs\") pod 
\"horizon-6978d4c7cb-4k7vb\" (UID: \"91980b55-1251-4404-a22d-1572dd91ec8a\") " pod="openstack/horizon-6978d4c7cb-4k7vb" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.320438 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sc87\" (UniqueName: \"kubernetes.io/projected/91980b55-1251-4404-a22d-1572dd91ec8a-kube-api-access-9sc87\") pod \"horizon-6978d4c7cb-4k7vb\" (UID: \"91980b55-1251-4404-a22d-1572dd91ec8a\") " pod="openstack/horizon-6978d4c7cb-4k7vb" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.320485 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/91980b55-1251-4404-a22d-1572dd91ec8a-config-data\") pod \"horizon-6978d4c7cb-4k7vb\" (UID: \"91980b55-1251-4404-a22d-1572dd91ec8a\") " pod="openstack/horizon-6978d4c7cb-4k7vb" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.320570 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91980b55-1251-4404-a22d-1572dd91ec8a-logs\") pod \"horizon-6978d4c7cb-4k7vb\" (UID: \"91980b55-1251-4404-a22d-1572dd91ec8a\") " pod="openstack/horizon-6978d4c7cb-4k7vb" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.320608 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91980b55-1251-4404-a22d-1572dd91ec8a-combined-ca-bundle\") pod \"horizon-6978d4c7cb-4k7vb\" (UID: \"91980b55-1251-4404-a22d-1572dd91ec8a\") " pod="openstack/horizon-6978d4c7cb-4k7vb" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.320630 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/91980b55-1251-4404-a22d-1572dd91ec8a-horizon-secret-key\") pod \"horizon-6978d4c7cb-4k7vb\" (UID: \"91980b55-1251-4404-a22d-1572dd91ec8a\") " pod="openstack/horizon-6978d4c7cb-4k7vb" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.320674 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91980b55-1251-4404-a22d-1572dd91ec8a-scripts\") pod \"horizon-6978d4c7cb-4k7vb\" (UID: \"91980b55-1251-4404-a22d-1572dd91ec8a\") " pod="openstack/horizon-6978d4c7cb-4k7vb" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.321879 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91980b55-1251-4404-a22d-1572dd91ec8a-logs\") pod \"horizon-6978d4c7cb-4k7vb\" (UID: \"91980b55-1251-4404-a22d-1572dd91ec8a\") " pod="openstack/horizon-6978d4c7cb-4k7vb" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.322771 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91980b55-1251-4404-a22d-1572dd91ec8a-scripts\") pod \"horizon-6978d4c7cb-4k7vb\" (UID: \"91980b55-1251-4404-a22d-1572dd91ec8a\") " pod="openstack/horizon-6978d4c7cb-4k7vb" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.324134 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/91980b55-1251-4404-a22d-1572dd91ec8a-config-data\") pod \"horizon-6978d4c7cb-4k7vb\" (UID: \"91980b55-1251-4404-a22d-1572dd91ec8a\") " pod="openstack/horizon-6978d4c7cb-4k7vb" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.333236 4680 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/91980b55-1251-4404-a22d-1572dd91ec8a-horizon-tls-certs\") pod \"horizon-6978d4c7cb-4k7vb\" (UID: \"91980b55-1251-4404-a22d-1572dd91ec8a\") " pod="openstack/horizon-6978d4c7cb-4k7vb" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.334104 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91980b55-1251-4404-a22d-1572dd91ec8a-combined-ca-bundle\") pod \"horizon-6978d4c7cb-4k7vb\" (UID: \"91980b55-1251-4404-a22d-1572dd91ec8a\") " pod="openstack/horizon-6978d4c7cb-4k7vb" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.351925 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/91980b55-1251-4404-a22d-1572dd91ec8a-horizon-secret-key\") pod \"horizon-6978d4c7cb-4k7vb\" (UID: \"91980b55-1251-4404-a22d-1572dd91ec8a\") " pod="openstack/horizon-6978d4c7cb-4k7vb" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.355772 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sc87\" (UniqueName: \"kubernetes.io/projected/91980b55-1251-4404-a22d-1572dd91ec8a-kube-api-access-9sc87\") pod \"horizon-6978d4c7cb-4k7vb\" (UID: \"91980b55-1251-4404-a22d-1572dd91ec8a\") " pod="openstack/horizon-6978d4c7cb-4k7vb" Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.398102 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nm76h"] Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.398879 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-nm76h" podUID="644b1f7b-2a70-42cc-b11e-e1c1862a8ad4" containerName="dnsmasq-dns" containerID="cri-o://24519869843f5e829b64e086c99cd62d1b0fb0ace031148021a74c4e616b832d" gracePeriod=10 Feb 17 18:02:19 crc kubenswrapper[4680]: I0217 18:02:19.414808 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6978d4c7cb-4k7vb" Feb 17 18:02:20 crc kubenswrapper[4680]: I0217 18:02:20.073216 4680 generic.go:334] "Generic (PLEG): container finished" podID="644b1f7b-2a70-42cc-b11e-e1c1862a8ad4" containerID="24519869843f5e829b64e086c99cd62d1b0fb0ace031148021a74c4e616b832d" exitCode=0 Feb 17 18:02:20 crc kubenswrapper[4680]: I0217 18:02:20.073263 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nm76h" event={"ID":"644b1f7b-2a70-42cc-b11e-e1c1862a8ad4","Type":"ContainerDied","Data":"24519869843f5e829b64e086c99cd62d1b0fb0ace031148021a74c4e616b832d"} Feb 17 18:02:21 crc kubenswrapper[4680]: I0217 18:02:21.100474 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-nm76h" podUID="644b1f7b-2a70-42cc-b11e-e1c1862a8ad4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused" Feb 17 18:02:25 crc kubenswrapper[4680]: I0217 18:02:25.124785 4680 generic.go:334] "Generic (PLEG): container finished" podID="da3851c2-401e-446e-8677-e9fb12b8fd9f" containerID="394cb7193dd074b518155626ecb88130290e11d93fd280dd23d4a7e3acbb5d92" exitCode=0 Feb 17 18:02:25 crc kubenswrapper[4680]: I0217 18:02:25.125051 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dkvms" event={"ID":"da3851c2-401e-446e-8677-e9fb12b8fd9f","Type":"ContainerDied","Data":"394cb7193dd074b518155626ecb88130290e11d93fd280dd23d4a7e3acbb5d92"} Feb 17 18:02:28 crc kubenswrapper[4680]: E0217 18:02:28.976944 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 17 18:02:28 crc kubenswrapper[4680]: E0217 18:02:28.977693 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf9h5f8h8fhdch665h5bfh5cfhc7hb6h59bhc8h6fh5fbh94h7ch66hch566h69h9dhb7hd8h68fh66h67ch8dhc5h59bh5c7h559h8fhdbq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vvdmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7ffb7b86bf-9ttrt_openstack(6f610194-e55a-4fe1-821a-23921b314518): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 18:02:28 crc kubenswrapper[4680]: E0217 18:02:28.980309 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-7ffb7b86bf-9ttrt" podUID="6f610194-e55a-4fe1-821a-23921b314518" Feb 17 18:02:31 crc kubenswrapper[4680]: I0217 18:02:31.101454 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-nm76h" podUID="644b1f7b-2a70-42cc-b11e-e1c1862a8ad4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: i/o timeout" Feb 17 18:02:32 crc kubenswrapper[4680]: E0217 18:02:32.841906 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 17 18:02:32 crc kubenswrapper[4680]: E0217 18:02:32.842381 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d6h5f4h65h5dbh66dhcbhf7h674h57bh78h547h64chdfh689h5d9h599h564h68bh546h546h57bh94h6fh56dh5d8h5dfh56fh5dbh645h5bbh657h67cq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqsz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7bfbfb7db7-m428m_openstack(2d39666f-eb25-414e-a508-4b437b45254f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 18:02:32 crc kubenswrapper[4680]: E0217 18:02:32.845006 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-7bfbfb7db7-m428m" podUID="2d39666f-eb25-414e-a508-4b437b45254f" Feb 17 18:02:35 crc kubenswrapper[4680]: I0217 18:02:35.589392 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:02:35 crc kubenswrapper[4680]: I0217 18:02:35.590545 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:02:36 crc kubenswrapper[4680]: I0217 18:02:36.102559 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-nm76h" podUID="644b1f7b-2a70-42cc-b11e-e1c1862a8ad4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: i/o timeout" Feb 17 18:02:36 crc 
kubenswrapper[4680]: I0217 18:02:36.103115 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-nm76h" Feb 17 18:02:38 crc kubenswrapper[4680]: I0217 18:02:38.870659 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 18:02:38 crc kubenswrapper[4680]: I0217 18:02:38.870964 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 18:02:39 crc kubenswrapper[4680]: I0217 18:02:39.538641 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 18:02:39 crc kubenswrapper[4680]: I0217 18:02:39.538695 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.103162 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-nm76h" podUID="644b1f7b-2a70-42cc-b11e-e1c1862a8ad4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: i/o timeout" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.472375 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-nm76h" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.473361 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dkvms" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.484923 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.489751 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7ffb7b86bf-9ttrt" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.600740 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/644b1f7b-2a70-42cc-b11e-e1c1862a8ad4-config\") pod \"644b1f7b-2a70-42cc-b11e-e1c1862a8ad4\" (UID: \"644b1f7b-2a70-42cc-b11e-e1c1862a8ad4\") " Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.600804 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b0b1bd6-c80f-499f-9eb8-3038c2353027-logs\") pod \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\" (UID: \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\") " Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.600844 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b0b1bd6-c80f-499f-9eb8-3038c2353027-combined-ca-bundle\") pod \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\" (UID: \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\") " Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.600890 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da3851c2-401e-446e-8677-e9fb12b8fd9f-credential-keys\") pod \"da3851c2-401e-446e-8677-e9fb12b8fd9f\" (UID: \"da3851c2-401e-446e-8677-e9fb12b8fd9f\") " Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.600938 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6f610194-e55a-4fe1-821a-23921b314518-scripts\") pod \"6f610194-e55a-4fe1-821a-23921b314518\" (UID: \"6f610194-e55a-4fe1-821a-23921b314518\") " Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.601001 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/644b1f7b-2a70-42cc-b11e-e1c1862a8ad4-ovsdbserver-nb\") pod \"644b1f7b-2a70-42cc-b11e-e1c1862a8ad4\" (UID: \"644b1f7b-2a70-42cc-b11e-e1c1862a8ad4\") " Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.601071 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f610194-e55a-4fe1-821a-23921b314518-logs\") pod \"6f610194-e55a-4fe1-821a-23921b314518\" (UID: \"6f610194-e55a-4fe1-821a-23921b314518\") " Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.601139 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6f610194-e55a-4fe1-821a-23921b314518-horizon-secret-key\") pod \"6f610194-e55a-4fe1-821a-23921b314518\" (UID: \"6f610194-e55a-4fe1-821a-23921b314518\") " Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.601180 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-679kk\" (UniqueName: \"kubernetes.io/projected/da3851c2-401e-446e-8677-e9fb12b8fd9f-kube-api-access-679kk\") pod \"da3851c2-401e-446e-8677-e9fb12b8fd9f\" (UID: \"da3851c2-401e-446e-8677-e9fb12b8fd9f\") " Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.601233 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj6kf\" (UniqueName: \"kubernetes.io/projected/644b1f7b-2a70-42cc-b11e-e1c1862a8ad4-kube-api-access-jj6kf\") pod \"644b1f7b-2a70-42cc-b11e-e1c1862a8ad4\" (UID: 
\"644b1f7b-2a70-42cc-b11e-e1c1862a8ad4\") " Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.601280 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b0b1bd6-c80f-499f-9eb8-3038c2353027-scripts\") pod \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\" (UID: \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\") " Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.601344 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5b0b1bd6-c80f-499f-9eb8-3038c2353027-httpd-run\") pod \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\" (UID: \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\") " Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.601377 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/644b1f7b-2a70-42cc-b11e-e1c1862a8ad4-ovsdbserver-sb\") pod \"644b1f7b-2a70-42cc-b11e-e1c1862a8ad4\" (UID: \"644b1f7b-2a70-42cc-b11e-e1c1862a8ad4\") " Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.601403 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da3851c2-401e-446e-8677-e9fb12b8fd9f-combined-ca-bundle\") pod \"da3851c2-401e-446e-8677-e9fb12b8fd9f\" (UID: \"da3851c2-401e-446e-8677-e9fb12b8fd9f\") " Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.601467 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da3851c2-401e-446e-8677-e9fb12b8fd9f-fernet-keys\") pod \"da3851c2-401e-446e-8677-e9fb12b8fd9f\" (UID: \"da3851c2-401e-446e-8677-e9fb12b8fd9f\") " Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.601513 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6f610194-e55a-4fe1-821a-23921b314518-config-data\") pod \"6f610194-e55a-4fe1-821a-23921b314518\" (UID: \"6f610194-e55a-4fe1-821a-23921b314518\") " Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.601539 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\" (UID: \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\") " Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.601578 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b0b1bd6-c80f-499f-9eb8-3038c2353027-config-data\") pod \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\" (UID: \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\") " Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.601623 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da3851c2-401e-446e-8677-e9fb12b8fd9f-config-data\") pod \"da3851c2-401e-446e-8677-e9fb12b8fd9f\" (UID: \"da3851c2-401e-446e-8677-e9fb12b8fd9f\") " Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.601649 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvdmv\" (UniqueName: \"kubernetes.io/projected/6f610194-e55a-4fe1-821a-23921b314518-kube-api-access-vvdmv\") pod \"6f610194-e55a-4fe1-821a-23921b314518\" (UID: \"6f610194-e55a-4fe1-821a-23921b314518\") " Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 
18:02:41.601684 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da3851c2-401e-446e-8677-e9fb12b8fd9f-scripts\") pod \"da3851c2-401e-446e-8677-e9fb12b8fd9f\" (UID: \"da3851c2-401e-446e-8677-e9fb12b8fd9f\") " Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.601739 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/644b1f7b-2a70-42cc-b11e-e1c1862a8ad4-dns-svc\") pod \"644b1f7b-2a70-42cc-b11e-e1c1862a8ad4\" (UID: \"644b1f7b-2a70-42cc-b11e-e1c1862a8ad4\") " Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.601776 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbnfh\" (UniqueName: \"kubernetes.io/projected/5b0b1bd6-c80f-499f-9eb8-3038c2353027-kube-api-access-lbnfh\") pod \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\" (UID: \"5b0b1bd6-c80f-499f-9eb8-3038c2353027\") " Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.602830 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b0b1bd6-c80f-499f-9eb8-3038c2353027-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5b0b1bd6-c80f-499f-9eb8-3038c2353027" (UID: "5b0b1bd6-c80f-499f-9eb8-3038c2353027"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.605912 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b0b1bd6-c80f-499f-9eb8-3038c2353027-logs" (OuterVolumeSpecName: "logs") pod "5b0b1bd6-c80f-499f-9eb8-3038c2353027" (UID: "5b0b1bd6-c80f-499f-9eb8-3038c2353027"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.606517 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f610194-e55a-4fe1-821a-23921b314518-logs" (OuterVolumeSpecName: "logs") pod "6f610194-e55a-4fe1-821a-23921b314518" (UID: "6f610194-e55a-4fe1-821a-23921b314518"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.607249 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f610194-e55a-4fe1-821a-23921b314518-config-data" (OuterVolumeSpecName: "config-data") pod "6f610194-e55a-4fe1-821a-23921b314518" (UID: "6f610194-e55a-4fe1-821a-23921b314518"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.614214 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b0b1bd6-c80f-499f-9eb8-3038c2353027-kube-api-access-lbnfh" (OuterVolumeSpecName: "kube-api-access-lbnfh") pod "5b0b1bd6-c80f-499f-9eb8-3038c2353027" (UID: "5b0b1bd6-c80f-499f-9eb8-3038c2353027"). InnerVolumeSpecName "kube-api-access-lbnfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.645088 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da3851c2-401e-446e-8677-e9fb12b8fd9f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "da3851c2-401e-446e-8677-e9fb12b8fd9f" (UID: "da3851c2-401e-446e-8677-e9fb12b8fd9f"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.646267 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da3851c2-401e-446e-8677-e9fb12b8fd9f-kube-api-access-679kk" (OuterVolumeSpecName: "kube-api-access-679kk") pod "da3851c2-401e-446e-8677-e9fb12b8fd9f" (UID: "da3851c2-401e-446e-8677-e9fb12b8fd9f"). InnerVolumeSpecName "kube-api-access-679kk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.647663 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f610194-e55a-4fe1-821a-23921b314518-scripts" (OuterVolumeSpecName: "scripts") pod "6f610194-e55a-4fe1-821a-23921b314518" (UID: "6f610194-e55a-4fe1-821a-23921b314518"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.652946 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/644b1f7b-2a70-42cc-b11e-e1c1862a8ad4-kube-api-access-jj6kf" (OuterVolumeSpecName: "kube-api-access-jj6kf") pod "644b1f7b-2a70-42cc-b11e-e1c1862a8ad4" (UID: "644b1f7b-2a70-42cc-b11e-e1c1862a8ad4"). InnerVolumeSpecName "kube-api-access-jj6kf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.653115 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "5b0b1bd6-c80f-499f-9eb8-3038c2353027" (UID: "5b0b1bd6-c80f-499f-9eb8-3038c2353027"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.653513 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b0b1bd6-c80f-499f-9eb8-3038c2353027-scripts" (OuterVolumeSpecName: "scripts") pod "5b0b1bd6-c80f-499f-9eb8-3038c2353027" (UID: "5b0b1bd6-c80f-499f-9eb8-3038c2353027"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.655723 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f610194-e55a-4fe1-821a-23921b314518-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "6f610194-e55a-4fe1-821a-23921b314518" (UID: "6f610194-e55a-4fe1-821a-23921b314518"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.655791 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da3851c2-401e-446e-8677-e9fb12b8fd9f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "da3851c2-401e-446e-8677-e9fb12b8fd9f" (UID: "da3851c2-401e-446e-8677-e9fb12b8fd9f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.677443 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da3851c2-401e-446e-8677-e9fb12b8fd9f-scripts" (OuterVolumeSpecName: "scripts") pod "da3851c2-401e-446e-8677-e9fb12b8fd9f" (UID: "da3851c2-401e-446e-8677-e9fb12b8fd9f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.679917 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f610194-e55a-4fe1-821a-23921b314518-kube-api-access-vvdmv" (OuterVolumeSpecName: "kube-api-access-vvdmv") pod "6f610194-e55a-4fe1-821a-23921b314518" (UID: "6f610194-e55a-4fe1-821a-23921b314518"). InnerVolumeSpecName "kube-api-access-vvdmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.680160 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b0b1bd6-c80f-499f-9eb8-3038c2353027-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b0b1bd6-c80f-499f-9eb8-3038c2353027" (UID: "5b0b1bd6-c80f-499f-9eb8-3038c2353027"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.704769 4680 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6f610194-e55a-4fe1-821a-23921b314518-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.704878 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-679kk\" (UniqueName: \"kubernetes.io/projected/da3851c2-401e-446e-8677-e9fb12b8fd9f-kube-api-access-679kk\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.704896 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj6kf\" (UniqueName: \"kubernetes.io/projected/644b1f7b-2a70-42cc-b11e-e1c1862a8ad4-kube-api-access-jj6kf\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.704909 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b0b1bd6-c80f-499f-9eb8-3038c2353027-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.704921 4680 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5b0b1bd6-c80f-499f-9eb8-3038c2353027-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.704956 4680 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da3851c2-401e-446e-8677-e9fb12b8fd9f-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.704967 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6f610194-e55a-4fe1-821a-23921b314518-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.705050 4680 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.705066 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da3851c2-401e-446e-8677-e9fb12b8fd9f-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.705104 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvdmv\" (UniqueName: 
\"kubernetes.io/projected/6f610194-e55a-4fe1-821a-23921b314518-kube-api-access-vvdmv\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.705118 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbnfh\" (UniqueName: \"kubernetes.io/projected/5b0b1bd6-c80f-499f-9eb8-3038c2353027-kube-api-access-lbnfh\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.705130 4680 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b0b1bd6-c80f-499f-9eb8-3038c2353027-logs\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.705141 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b0b1bd6-c80f-499f-9eb8-3038c2353027-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.705152 4680 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da3851c2-401e-446e-8677-e9fb12b8fd9f-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.705164 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6f610194-e55a-4fe1-821a-23921b314518-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.705175 4680 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f610194-e55a-4fe1-821a-23921b314518-logs\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.722908 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/644b1f7b-2a70-42cc-b11e-e1c1862a8ad4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "644b1f7b-2a70-42cc-b11e-e1c1862a8ad4" (UID: "644b1f7b-2a70-42cc-b11e-e1c1862a8ad4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.727304 4680 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.730758 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da3851c2-401e-446e-8677-e9fb12b8fd9f-config-data" (OuterVolumeSpecName: "config-data") pod "da3851c2-401e-446e-8677-e9fb12b8fd9f" (UID: "da3851c2-401e-446e-8677-e9fb12b8fd9f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.737646 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da3851c2-401e-446e-8677-e9fb12b8fd9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da3851c2-401e-446e-8677-e9fb12b8fd9f" (UID: "da3851c2-401e-446e-8677-e9fb12b8fd9f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.740532 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b0b1bd6-c80f-499f-9eb8-3038c2353027-config-data" (OuterVolumeSpecName: "config-data") pod "5b0b1bd6-c80f-499f-9eb8-3038c2353027" (UID: "5b0b1bd6-c80f-499f-9eb8-3038c2353027"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.748085 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/644b1f7b-2a70-42cc-b11e-e1c1862a8ad4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "644b1f7b-2a70-42cc-b11e-e1c1862a8ad4" (UID: "644b1f7b-2a70-42cc-b11e-e1c1862a8ad4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.756615 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/644b1f7b-2a70-42cc-b11e-e1c1862a8ad4-config" (OuterVolumeSpecName: "config") pod "644b1f7b-2a70-42cc-b11e-e1c1862a8ad4" (UID: "644b1f7b-2a70-42cc-b11e-e1c1862a8ad4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.775283 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/644b1f7b-2a70-42cc-b11e-e1c1862a8ad4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "644b1f7b-2a70-42cc-b11e-e1c1862a8ad4" (UID: "644b1f7b-2a70-42cc-b11e-e1c1862a8ad4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.808767 4680 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.808800 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b0b1bd6-c80f-499f-9eb8-3038c2353027-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.808812 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da3851c2-401e-446e-8677-e9fb12b8fd9f-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.808824 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/644b1f7b-2a70-42cc-b11e-e1c1862a8ad4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.808832 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/644b1f7b-2a70-42cc-b11e-e1c1862a8ad4-config\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.808841 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/644b1f7b-2a70-42cc-b11e-e1c1862a8ad4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.808851 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da3851c2-401e-446e-8677-e9fb12b8fd9f-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Feb 17 18:02:41 crc kubenswrapper[4680]: I0217 18:02:41.808860 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/644b1f7b-2a70-42cc-b11e-e1c1862a8ad4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.273448 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5b0b1bd6-c80f-499f-9eb8-3038c2353027","Type":"ContainerDied","Data":"c60681a1b3bb9e18d96014e6fa5de8d1c0f3b556b9c6b9138f8db774a124bc47"} Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.274119 4680 scope.go:117] "RemoveContainer" containerID="08f272d1abf2ffb44570f976fcfc98fd0d5fa5829e5fe2d47c40c9b2bf9b16d4" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.274281 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.279290 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7ffb7b86bf-9ttrt" event={"ID":"6f610194-e55a-4fe1-821a-23921b314518","Type":"ContainerDied","Data":"9434e583d5064db22e372b975e6bb82f17446614717caf4d60fa7a9b4647eca6"} Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.279404 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7ffb7b86bf-9ttrt" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.291917 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nm76h" event={"ID":"644b1f7b-2a70-42cc-b11e-e1c1862a8ad4","Type":"ContainerDied","Data":"412f948aac5506d5dc69c8e28a19e643b44f689470e7d64903a86ba67435f3c2"} Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.291935 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-nm76h" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.297401 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dkvms" event={"ID":"da3851c2-401e-446e-8677-e9fb12b8fd9f","Type":"ContainerDied","Data":"debde9a5c511aa88a405b10dfe104f9ebd7e5c3262484b4aa679db95ba5abe4e"} Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.297475 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="debde9a5c511aa88a405b10dfe104f9ebd7e5c3262484b4aa679db95ba5abe4e" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.297544 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dkvms" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.341803 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 18:02:42 crc kubenswrapper[4680]: E0217 18:02:42.379512 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 17 18:02:42 crc kubenswrapper[4680]: E0217 18:02:42.379683 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wpgx4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-477hc_openstack(a49d620c-c0c2-41cb-b458-cb67351334ad): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 18:02:42 crc kubenswrapper[4680]: E0217 18:02:42.380872 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-477hc" podUID="a49d620c-c0c2-41cb-b458-cb67351334ad" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.386383 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.435103 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 18:02:42 crc kubenswrapper[4680]: E0217 18:02:42.435571 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b0b1bd6-c80f-499f-9eb8-3038c2353027" containerName="glance-httpd" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.435584 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b0b1bd6-c80f-499f-9eb8-3038c2353027" containerName="glance-httpd" Feb 17 18:02:42 crc 
kubenswrapper[4680]: E0217 18:02:42.435603 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da3851c2-401e-446e-8677-e9fb12b8fd9f" containerName="keystone-bootstrap" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.435608 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="da3851c2-401e-446e-8677-e9fb12b8fd9f" containerName="keystone-bootstrap" Feb 17 18:02:42 crc kubenswrapper[4680]: E0217 18:02:42.435625 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b0b1bd6-c80f-499f-9eb8-3038c2353027" containerName="glance-log" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.435631 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b0b1bd6-c80f-499f-9eb8-3038c2353027" containerName="glance-log" Feb 17 18:02:42 crc kubenswrapper[4680]: E0217 18:02:42.435644 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644b1f7b-2a70-42cc-b11e-e1c1862a8ad4" containerName="init" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.435651 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="644b1f7b-2a70-42cc-b11e-e1c1862a8ad4" containerName="init" Feb 17 18:02:42 crc kubenswrapper[4680]: E0217 18:02:42.435666 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644b1f7b-2a70-42cc-b11e-e1c1862a8ad4" containerName="dnsmasq-dns" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.435671 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="644b1f7b-2a70-42cc-b11e-e1c1862a8ad4" containerName="dnsmasq-dns" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.435833 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b0b1bd6-c80f-499f-9eb8-3038c2353027" containerName="glance-httpd" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.435845 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b0b1bd6-c80f-499f-9eb8-3038c2353027" containerName="glance-log" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.435859 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="da3851c2-401e-446e-8677-e9fb12b8fd9f" containerName="keystone-bootstrap" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.435870 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="644b1f7b-2a70-42cc-b11e-e1c1862a8ad4" containerName="dnsmasq-dns" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.436757 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.440910 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.441085 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.446454 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.458713 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnbrz\" (UniqueName: \"kubernetes.io/projected/95585ae8-c7e0-415d-b239-3c078e4d0a7e-kube-api-access-jnbrz\") pod \"glance-default-internal-api-0\" (UID: \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.458807 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95585ae8-c7e0-415d-b239-3c078e4d0a7e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.458841 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95585ae8-c7e0-415d-b239-3c078e4d0a7e-logs\") pod \"glance-default-internal-api-0\" (UID: \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.458916 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95585ae8-c7e0-415d-b239-3c078e4d0a7e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.458937 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95585ae8-c7e0-415d-b239-3c078e4d0a7e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.458971 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95585ae8-c7e0-415d-b239-3c078e4d0a7e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.459002 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.459033 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95585ae8-c7e0-415d-b239-3c078e4d0a7e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.469016 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7ffb7b86bf-9ttrt"] Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.477727 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7ffb7b86bf-9ttrt"] Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.488774 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nm76h"] Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.496191 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nm76h"] Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.560678 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95585ae8-c7e0-415d-b239-3c078e4d0a7e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.560719 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95585ae8-c7e0-415d-b239-3c078e4d0a7e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.560757 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95585ae8-c7e0-415d-b239-3c078e4d0a7e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.560796 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.560832 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95585ae8-c7e0-415d-b239-3c078e4d0a7e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.560911 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnbrz\" (UniqueName: \"kubernetes.io/projected/95585ae8-c7e0-415d-b239-3c078e4d0a7e-kube-api-access-jnbrz\") pod \"glance-default-internal-api-0\" (UID: \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.560982 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95585ae8-c7e0-415d-b239-3c078e4d0a7e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.561007 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95585ae8-c7e0-415d-b239-3c078e4d0a7e-logs\") pod \"glance-default-internal-api-0\" (UID: \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.561051 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.562219 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95585ae8-c7e0-415d-b239-3c078e4d0a7e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.562413 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95585ae8-c7e0-415d-b239-3c078e4d0a7e-logs\") pod \"glance-default-internal-api-0\" (UID: \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.566212 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95585ae8-c7e0-415d-b239-3c078e4d0a7e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.571182 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95585ae8-c7e0-415d-b239-3c078e4d0a7e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.583899 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95585ae8-c7e0-415d-b239-3c078e4d0a7e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.584385 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95585ae8-c7e0-415d-b239-3c078e4d0a7e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.595525 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnbrz\" (UniqueName: \"kubernetes.io/projected/95585ae8-c7e0-415d-b239-3c078e4d0a7e-kube-api-access-jnbrz\") pod \"glance-default-internal-api-0\" (UID: \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:42 crc 
kubenswrapper[4680]: I0217 18:02:42.624699 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.765993 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-dkvms"] Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.768119 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.775229 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-dkvms"] Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.821113 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-dgb89"] Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.822354 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dgb89" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.824857 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.825091 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.825276 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.825515 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.828122 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fww75" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.851422 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dgb89"] Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.882945 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-combined-ca-bundle\") pod \"keystone-bootstrap-dgb89\" (UID: \"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2\") " pod="openstack/keystone-bootstrap-dgb89" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.883073 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-scripts\") pod \"keystone-bootstrap-dgb89\" (UID: \"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2\") " pod="openstack/keystone-bootstrap-dgb89" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.883140 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-fernet-keys\") pod \"keystone-bootstrap-dgb89\" (UID: \"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2\") " pod="openstack/keystone-bootstrap-dgb89" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.883161 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz6ll\" (UniqueName: 
\"kubernetes.io/projected/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-kube-api-access-pz6ll\") pod \"keystone-bootstrap-dgb89\" (UID: \"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2\") " pod="openstack/keystone-bootstrap-dgb89" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.883198 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-credential-keys\") pod \"keystone-bootstrap-dgb89\" (UID: \"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2\") " pod="openstack/keystone-bootstrap-dgb89" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.883263 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-config-data\") pod \"keystone-bootstrap-dgb89\" (UID: \"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2\") " pod="openstack/keystone-bootstrap-dgb89" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.985358 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-credential-keys\") pod \"keystone-bootstrap-dgb89\" (UID: \"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2\") " pod="openstack/keystone-bootstrap-dgb89" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.985444 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-config-data\") pod \"keystone-bootstrap-dgb89\" (UID: \"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2\") " pod="openstack/keystone-bootstrap-dgb89" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.985507 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-combined-ca-bundle\") pod \"keystone-bootstrap-dgb89\" (UID: \"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2\") " pod="openstack/keystone-bootstrap-dgb89" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.985539 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-scripts\") pod \"keystone-bootstrap-dgb89\" (UID: \"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2\") " pod="openstack/keystone-bootstrap-dgb89" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.985587 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-fernet-keys\") pod \"keystone-bootstrap-dgb89\" (UID: \"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2\") " pod="openstack/keystone-bootstrap-dgb89" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.985607 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz6ll\" (UniqueName: \"kubernetes.io/projected/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-kube-api-access-pz6ll\") pod \"keystone-bootstrap-dgb89\" (UID: \"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2\") " pod="openstack/keystone-bootstrap-dgb89" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.992475 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-scripts\") pod \"keystone-bootstrap-dgb89\" (UID: 
\"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2\") " pod="openstack/keystone-bootstrap-dgb89" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.992838 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-credential-keys\") pod \"keystone-bootstrap-dgb89\" (UID: \"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2\") " pod="openstack/keystone-bootstrap-dgb89" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.995054 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-config-data\") pod \"keystone-bootstrap-dgb89\" (UID: \"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2\") " pod="openstack/keystone-bootstrap-dgb89" Feb 17 18:02:42 crc kubenswrapper[4680]: I0217 18:02:42.997967 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-combined-ca-bundle\") pod \"keystone-bootstrap-dgb89\" (UID: \"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2\") " pod="openstack/keystone-bootstrap-dgb89" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.014767 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz6ll\" (UniqueName: \"kubernetes.io/projected/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-kube-api-access-pz6ll\") pod \"keystone-bootstrap-dgb89\" (UID: \"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2\") " pod="openstack/keystone-bootstrap-dgb89" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.022784 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-fernet-keys\") pod \"keystone-bootstrap-dgb89\" (UID: \"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2\") " pod="openstack/keystone-bootstrap-dgb89" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.114980 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b0b1bd6-c80f-499f-9eb8-3038c2353027" path="/var/lib/kubelet/pods/5b0b1bd6-c80f-499f-9eb8-3038c2353027/volumes" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.116073 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="644b1f7b-2a70-42cc-b11e-e1c1862a8ad4" path="/var/lib/kubelet/pods/644b1f7b-2a70-42cc-b11e-e1c1862a8ad4/volumes" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.116698 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f610194-e55a-4fe1-821a-23921b314518" path="/var/lib/kubelet/pods/6f610194-e55a-4fe1-821a-23921b314518/volumes" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.117638 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da3851c2-401e-446e-8677-e9fb12b8fd9f" path="/var/lib/kubelet/pods/da3851c2-401e-446e-8677-e9fb12b8fd9f/volumes" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.140852 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dgb89" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.308485 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bfbfb7db7-m428m" event={"ID":"2d39666f-eb25-414e-a508-4b437b45254f","Type":"ContainerDied","Data":"1c2d288c91c0616078d3992579052617a728a5d296b8962591e7fef047770db0"} Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.308536 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c2d288c91c0616078d3992579052617a728a5d296b8962591e7fef047770db0" Feb 17 18:02:43 crc kubenswrapper[4680]: E0217 18:02:43.309911 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-477hc" podUID="a49d620c-c0c2-41cb-b458-cb67351334ad" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.314397 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bfbfb7db7-m428m" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.332692 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 18:02:43 crc kubenswrapper[4680]: E0217 18:02:43.339907 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 17 18:02:43 crc kubenswrapper[4680]: E0217 18:02:43.340188 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n557h559h564h5bbhc6h667h574h677h555h64bh575h54h5bh7fh598h679hf5h95hb7h66dhd7h545hd9h665hd4hc8h5c5h646hd8h7dh64fhf5q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bw45w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7f5f74b1-9667-421c-94e6-484208c36f55): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.390649 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d39666f-eb25-414e-a508-4b437b45254f-logs\") pod \"2d39666f-eb25-414e-a508-4b437b45254f\" (UID: \"2d39666f-eb25-414e-a508-4b437b45254f\") " Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.390911 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d39666f-eb25-414e-a508-4b437b45254f-scripts\") pod \"2d39666f-eb25-414e-a508-4b437b45254f\" (UID: \"2d39666f-eb25-414e-a508-4b437b45254f\") " Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.391053 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqsz5\" (UniqueName: \"kubernetes.io/projected/2d39666f-eb25-414e-a508-4b437b45254f-kube-api-access-wqsz5\") pod \"2d39666f-eb25-414e-a508-4b437b45254f\" (UID: \"2d39666f-eb25-414e-a508-4b437b45254f\") " Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.391202 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2d39666f-eb25-414e-a508-4b437b45254f-horizon-secret-key\") pod \"2d39666f-eb25-414e-a508-4b437b45254f\" (UID: \"2d39666f-eb25-414e-a508-4b437b45254f\") " Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.391296 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d39666f-eb25-414e-a508-4b437b45254f-logs" (OuterVolumeSpecName: "logs") pod "2d39666f-eb25-414e-a508-4b437b45254f" (UID: "2d39666f-eb25-414e-a508-4b437b45254f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.391456 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d39666f-eb25-414e-a508-4b437b45254f-config-data\") pod \"2d39666f-eb25-414e-a508-4b437b45254f\" (UID: \"2d39666f-eb25-414e-a508-4b437b45254f\") " Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.391810 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d39666f-eb25-414e-a508-4b437b45254f-scripts" (OuterVolumeSpecName: "scripts") pod "2d39666f-eb25-414e-a508-4b437b45254f" (UID: "2d39666f-eb25-414e-a508-4b437b45254f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.392053 4680 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d39666f-eb25-414e-a508-4b437b45254f-logs\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.392116 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d39666f-eb25-414e-a508-4b437b45254f-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.393191 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d39666f-eb25-414e-a508-4b437b45254f-config-data" (OuterVolumeSpecName: "config-data") pod "2d39666f-eb25-414e-a508-4b437b45254f" (UID: "2d39666f-eb25-414e-a508-4b437b45254f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.406889 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d39666f-eb25-414e-a508-4b437b45254f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "2d39666f-eb25-414e-a508-4b437b45254f" (UID: "2d39666f-eb25-414e-a508-4b437b45254f"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.407129 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d39666f-eb25-414e-a508-4b437b45254f-kube-api-access-wqsz5" (OuterVolumeSpecName: "kube-api-access-wqsz5") pod "2d39666f-eb25-414e-a508-4b437b45254f" (UID: "2d39666f-eb25-414e-a508-4b437b45254f"). InnerVolumeSpecName "kube-api-access-wqsz5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.499236 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfcfa220-e137-42fd-a4ab-8b1682720cb1-combined-ca-bundle\") pod \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\" (UID: \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\") " Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.499620 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfcfa220-e137-42fd-a4ab-8b1682720cb1-scripts\") pod \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\" (UID: \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\") " Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.499662 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfcfa220-e137-42fd-a4ab-8b1682720cb1-config-data\") pod \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\" (UID: \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\") " Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.499788 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfcfa220-e137-42fd-a4ab-8b1682720cb1-logs\") pod \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\" (UID: \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\") " Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.499818 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\" (UID: \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\") " Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.499880 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dfcfa220-e137-42fd-a4ab-8b1682720cb1-httpd-run\") pod \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\" (UID: \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\") " Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.499959 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ppts\" (UniqueName: \"kubernetes.io/projected/dfcfa220-e137-42fd-a4ab-8b1682720cb1-kube-api-access-4ppts\") pod \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\" (UID: \"dfcfa220-e137-42fd-a4ab-8b1682720cb1\") " Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.500476 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqsz5\" (UniqueName: \"kubernetes.io/projected/2d39666f-eb25-414e-a508-4b437b45254f-kube-api-access-wqsz5\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.500499 4680 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2d39666f-eb25-414e-a508-4b437b45254f-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.500511 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d39666f-eb25-414e-a508-4b437b45254f-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.501015 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfcfa220-e137-42fd-a4ab-8b1682720cb1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod 
"dfcfa220-e137-42fd-a4ab-8b1682720cb1" (UID: "dfcfa220-e137-42fd-a4ab-8b1682720cb1"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.501097 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfcfa220-e137-42fd-a4ab-8b1682720cb1-logs" (OuterVolumeSpecName: "logs") pod "dfcfa220-e137-42fd-a4ab-8b1682720cb1" (UID: "dfcfa220-e137-42fd-a4ab-8b1682720cb1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.524743 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "dfcfa220-e137-42fd-a4ab-8b1682720cb1" (UID: "dfcfa220-e137-42fd-a4ab-8b1682720cb1"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.533051 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfcfa220-e137-42fd-a4ab-8b1682720cb1-kube-api-access-4ppts" (OuterVolumeSpecName: "kube-api-access-4ppts") pod "dfcfa220-e137-42fd-a4ab-8b1682720cb1" (UID: "dfcfa220-e137-42fd-a4ab-8b1682720cb1"). InnerVolumeSpecName "kube-api-access-4ppts". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.534864 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfcfa220-e137-42fd-a4ab-8b1682720cb1-scripts" (OuterVolumeSpecName: "scripts") pod "dfcfa220-e137-42fd-a4ab-8b1682720cb1" (UID: "dfcfa220-e137-42fd-a4ab-8b1682720cb1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.539339 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfcfa220-e137-42fd-a4ab-8b1682720cb1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dfcfa220-e137-42fd-a4ab-8b1682720cb1" (UID: "dfcfa220-e137-42fd-a4ab-8b1682720cb1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.600731 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfcfa220-e137-42fd-a4ab-8b1682720cb1-config-data" (OuterVolumeSpecName: "config-data") pod "dfcfa220-e137-42fd-a4ab-8b1682720cb1" (UID: "dfcfa220-e137-42fd-a4ab-8b1682720cb1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.602304 4680 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfcfa220-e137-42fd-a4ab-8b1682720cb1-logs\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.602611 4680 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.602690 4680 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dfcfa220-e137-42fd-a4ab-8b1682720cb1-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.602776 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ppts\" (UniqueName: \"kubernetes.io/projected/dfcfa220-e137-42fd-a4ab-8b1682720cb1-kube-api-access-4ppts\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.602896 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfcfa220-e137-42fd-a4ab-8b1682720cb1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.603003 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfcfa220-e137-42fd-a4ab-8b1682720cb1-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.603068 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfcfa220-e137-42fd-a4ab-8b1682720cb1-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.635490 4680 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.705078 4680 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 17 18:02:43 crc kubenswrapper[4680]: I0217 18:02:43.721321 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8ccfcccc8-2qwb2"] Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.319218 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bfbfb7db7-m428m" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.319240 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.319288 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dfcfa220-e137-42fd-a4ab-8b1682720cb1","Type":"ContainerDied","Data":"4e7d6c11b4d4db37503b9dcdaeb9cf4be984a1abbc94abf799796bce7b9643d7"} Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.422023 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7bfbfb7db7-m428m"] Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.434389 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7bfbfb7db7-m428m"] Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.454375 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.465479 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.481874 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 18:02:44 crc kubenswrapper[4680]: E0217 18:02:44.482396 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfcfa220-e137-42fd-a4ab-8b1682720cb1" containerName="glance-httpd" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.482416 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfcfa220-e137-42fd-a4ab-8b1682720cb1" containerName="glance-httpd" Feb 17 18:02:44 crc kubenswrapper[4680]: E0217 18:02:44.482453 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfcfa220-e137-42fd-a4ab-8b1682720cb1" containerName="glance-log" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.482461 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfcfa220-e137-42fd-a4ab-8b1682720cb1" containerName="glance-log" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.482657 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfcfa220-e137-42fd-a4ab-8b1682720cb1" containerName="glance-log" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.482677 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfcfa220-e137-42fd-a4ab-8b1682720cb1" containerName="glance-httpd" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.483904 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.487425 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.488639 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.519379 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.621772 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a9d1eec-3c1b-405d-91d8-d76e5233e223-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.621856 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a9d1eec-3c1b-405d-91d8-d76e5233e223-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.621886 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.621948 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ddbc\" (UniqueName: \"kubernetes.io/projected/7a9d1eec-3c1b-405d-91d8-d76e5233e223-kube-api-access-8ddbc\") pod \"glance-default-external-api-0\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.622119 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a9d1eec-3c1b-405d-91d8-d76e5233e223-config-data\") pod \"glance-default-external-api-0\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.622366 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a9d1eec-3c1b-405d-91d8-d76e5233e223-logs\") pod \"glance-default-external-api-0\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.622488 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a9d1eec-3c1b-405d-91d8-d76e5233e223-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.622528 4680 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a9d1eec-3c1b-405d-91d8-d76e5233e223-scripts\") pod \"glance-default-external-api-0\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.724508 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a9d1eec-3c1b-405d-91d8-d76e5233e223-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.724988 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a9d1eec-3c1b-405d-91d8-d76e5233e223-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.724942 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a9d1eec-3c1b-405d-91d8-d76e5233e223-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.725062 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.725195 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.725097 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ddbc\" (UniqueName: \"kubernetes.io/projected/7a9d1eec-3c1b-405d-91d8-d76e5233e223-kube-api-access-8ddbc\") pod \"glance-default-external-api-0\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.730078 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a9d1eec-3c1b-405d-91d8-d76e5233e223-config-data\") pod \"glance-default-external-api-0\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.730185 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a9d1eec-3c1b-405d-91d8-d76e5233e223-logs\") pod \"glance-default-external-api-0\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.730281 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/7a9d1eec-3c1b-405d-91d8-d76e5233e223-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.730321 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a9d1eec-3c1b-405d-91d8-d76e5233e223-scripts\") pod \"glance-default-external-api-0\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.731662 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a9d1eec-3c1b-405d-91d8-d76e5233e223-logs\") pod \"glance-default-external-api-0\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.733730 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a9d1eec-3c1b-405d-91d8-d76e5233e223-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.734403 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a9d1eec-3c1b-405d-91d8-d76e5233e223-scripts\") pod \"glance-default-external-api-0\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.737864 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a9d1eec-3c1b-405d-91d8-d76e5233e223-config-data\") pod \"glance-default-external-api-0\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.738542 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a9d1eec-3c1b-405d-91d8-d76e5233e223-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.743553 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ddbc\" (UniqueName: \"kubernetes.io/projected/7a9d1eec-3c1b-405d-91d8-d76e5233e223-kube-api-access-8ddbc\") pod \"glance-default-external-api-0\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.749075 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") " pod="openstack/glance-default-external-api-0" Feb 17 18:02:44 crc kubenswrapper[4680]: I0217 18:02:44.826714 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 18:02:45 crc kubenswrapper[4680]: I0217 18:02:45.121684 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d39666f-eb25-414e-a508-4b437b45254f" path="/var/lib/kubelet/pods/2d39666f-eb25-414e-a508-4b437b45254f/volumes" Feb 17 18:02:45 crc kubenswrapper[4680]: I0217 18:02:45.122363 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfcfa220-e137-42fd-a4ab-8b1682720cb1" path="/var/lib/kubelet/pods/dfcfa220-e137-42fd-a4ab-8b1682720cb1/volumes" Feb 17 18:02:46 crc kubenswrapper[4680]: I0217 18:02:46.104480 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-nm76h" podUID="644b1f7b-2a70-42cc-b11e-e1c1862a8ad4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: i/o timeout" Feb 17 18:02:46 crc kubenswrapper[4680]: I0217 18:02:46.973483 4680 scope.go:117] "RemoveContainer" containerID="921c2f7ebd5307800e63f8d5eb1f2a41f78ca5553aa7546f1f49cf28336b8cc0" Feb 17 18:02:47 crc kubenswrapper[4680]: I0217 18:02:47.176096 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6978d4c7cb-4k7vb"] Feb 17 18:02:47 crc kubenswrapper[4680]: I0217 18:02:47.359279 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8ccfcccc8-2qwb2" event={"ID":"d7b7455b-130d-4711-9174-16e352b16455","Type":"ContainerStarted","Data":"0323bb6a2829dff8da034eca9a4d41b1f2869b7eaa6ce03396273b8273424438"} Feb 17 18:02:47 crc kubenswrapper[4680]: I0217 18:02:47.375040 4680 scope.go:117] "RemoveContainer" containerID="24519869843f5e829b64e086c99cd62d1b0fb0ace031148021a74c4e616b832d" Feb 17 18:02:47 crc kubenswrapper[4680]: W0217 18:02:47.379104 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91980b55_1251_4404_a22d_1572dd91ec8a.slice/crio-95e6c735943f7b637f5234b57733b4e8d4e34a48b0b81b0ed4e98ca9d0a23011 WatchSource:0}: Error finding container 95e6c735943f7b637f5234b57733b4e8d4e34a48b0b81b0ed4e98ca9d0a23011: Status 404 returned error can't find the container with id 95e6c735943f7b637f5234b57733b4e8d4e34a48b0b81b0ed4e98ca9d0a23011 Feb 17 18:02:47 crc kubenswrapper[4680]: E0217 18:02:47.409282 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 17 18:02:47 crc kubenswrapper[4680]: E0217 18:02:47.409445 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtxmg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-9458w_openstack(f869168f-8063-46a6-b574-d8be569ac733): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 18:02:47 crc kubenswrapper[4680]: E0217 18:02:47.410649 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-9458w" podUID="f869168f-8063-46a6-b574-d8be569ac733" Feb 17 18:02:47 crc kubenswrapper[4680]: I0217 18:02:47.422989 4680 scope.go:117] "RemoveContainer" containerID="33e4c088df5ba02ae56df5ad2e9a52f187d5ea04cbfd87b48896c6ede700d437" Feb 17 18:02:47 crc kubenswrapper[4680]: I0217 18:02:47.489504 4680 scope.go:117] "RemoveContainer" containerID="415bcc4c75dbbd456dd84a53d3b76ef2190b8898f992be300ee6e9de0f72cb60" Feb 17 18:02:47 crc kubenswrapper[4680]: I0217 18:02:47.572369 4680 scope.go:117] "RemoveContainer" containerID="9c3fc1d04aa7953dd233d5ad48fc26b0554ac7d3bc060b82be211bf7f82db047" Feb 17 18:02:47 crc kubenswrapper[4680]: I0217 18:02:47.630987 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 18:02:47 crc kubenswrapper[4680]: I0217 18:02:47.643001 4680 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/keystone-bootstrap-dgb89"] Feb 17 18:02:47 crc kubenswrapper[4680]: W0217 18:02:47.655510 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a9d1eec_3c1b_405d_91d8_d76e5233e223.slice/crio-7843243fe2f15eda2497ca1bb69b688ec71083f2dc81e79aeddc842c4db08b02 WatchSource:0}: Error finding container 7843243fe2f15eda2497ca1bb69b688ec71083f2dc81e79aeddc842c4db08b02: Status 404 returned error can't find the container with id 7843243fe2f15eda2497ca1bb69b688ec71083f2dc81e79aeddc842c4db08b02 Feb 17 18:02:47 crc kubenswrapper[4680]: W0217 18:02:47.658953 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbcf2418_ecf2_4c69_9129_88ebe5cab8a2.slice/crio-bbee8d8e2431784f3ab6f97447d8a7477b7efac599f30077e5b8dc51c9c10c38 WatchSource:0}: Error finding container bbee8d8e2431784f3ab6f97447d8a7477b7efac599f30077e5b8dc51c9c10c38: Status 404 returned error can't find the container with id bbee8d8e2431784f3ab6f97447d8a7477b7efac599f30077e5b8dc51c9c10c38 Feb 17 18:02:47 crc kubenswrapper[4680]: I0217 18:02:47.706691 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 18:02:48 crc kubenswrapper[4680]: I0217 18:02:48.373196 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dgb89" event={"ID":"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2","Type":"ContainerStarted","Data":"bbee8d8e2431784f3ab6f97447d8a7477b7efac599f30077e5b8dc51c9c10c38"} Feb 17 18:02:48 crc kubenswrapper[4680]: I0217 18:02:48.375187 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6978d4c7cb-4k7vb" event={"ID":"91980b55-1251-4404-a22d-1572dd91ec8a","Type":"ContainerStarted","Data":"95e6c735943f7b637f5234b57733b4e8d4e34a48b0b81b0ed4e98ca9d0a23011"} Feb 17 18:02:48 crc kubenswrapper[4680]: I0217 18:02:48.378073 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"95585ae8-c7e0-415d-b239-3c078e4d0a7e","Type":"ContainerStarted","Data":"94be2f4ef7c0abc886f968595440a745a2dae7c7589c85f457804ba6d6daa4a2"} Feb 17 18:02:48 crc kubenswrapper[4680]: I0217 18:02:48.379825 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7a9d1eec-3c1b-405d-91d8-d76e5233e223","Type":"ContainerStarted","Data":"7843243fe2f15eda2497ca1bb69b688ec71083f2dc81e79aeddc842c4db08b02"} Feb 17 18:02:48 crc kubenswrapper[4680]: E0217 18:02:48.384590 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-9458w" podUID="f869168f-8063-46a6-b574-d8be569ac733" Feb 17 18:02:51 crc kubenswrapper[4680]: I0217 18:02:51.425110 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7a9d1eec-3c1b-405d-91d8-d76e5233e223","Type":"ContainerStarted","Data":"4b0d6ce47086ad55b4ef743d0232ba8f0f05cf9455ca4bb43c008f14ce05c4cc"} Feb 17 18:02:51 crc kubenswrapper[4680]: I0217 18:02:51.433600 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-76rb8" 
event={"ID":"b2c53952-8df2-4f30-ae29-5778822cae38","Type":"ContainerStarted","Data":"63c18c1503d5044c8df349e56802f1c81e24b1744c0c3f36b09a08879a1536f8"} Feb 17 18:02:51 crc kubenswrapper[4680]: I0217 18:02:51.458202 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-76rb8" podStartSLOduration=11.499430713 podStartE2EDuration="43.45818476s" podCreationTimestamp="2026-02-17 18:02:08 +0000 UTC" firstStartedPulling="2026-02-17 18:02:10.329249301 +0000 UTC m=+1059.880147980" lastFinishedPulling="2026-02-17 18:02:42.288003358 +0000 UTC m=+1091.838902027" observedRunningTime="2026-02-17 18:02:51.456742154 +0000 UTC m=+1101.007640833" watchObservedRunningTime="2026-02-17 18:02:51.45818476 +0000 UTC m=+1101.009083439" Feb 17 18:02:51 crc kubenswrapper[4680]: I0217 18:02:51.460538 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c489b55b9-9nhw5" event={"ID":"6e2a20ce-72be-4602-99f0-c1fb77919798","Type":"ContainerStarted","Data":"3475ac570d5fd7e9f2211e2e16bec30c65494169117464102e52c577e380646f"} Feb 17 18:02:51 crc kubenswrapper[4680]: I0217 18:02:51.476861 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dgb89" event={"ID":"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2","Type":"ContainerStarted","Data":"78b364685af07b8c110b1575898f77e9639129c3c8d218360fa60b41101601eb"} Feb 17 18:02:51 crc kubenswrapper[4680]: I0217 18:02:51.487334 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8ccfcccc8-2qwb2" event={"ID":"d7b7455b-130d-4711-9174-16e352b16455","Type":"ContainerStarted","Data":"5dbc357194ba5ec72add1e74a2d767f31fc9addc3c6a23c67bde601fb330b16d"} Feb 17 18:02:51 crc kubenswrapper[4680]: I0217 18:02:51.489088 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6978d4c7cb-4k7vb" event={"ID":"91980b55-1251-4404-a22d-1572dd91ec8a","Type":"ContainerStarted","Data":"f59a5e892949b54812a8a4035a2056ab8d7bd8ed564ec030ad59531dee151423"} Feb 17 18:02:51 crc kubenswrapper[4680]: I0217 18:02:51.490717 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"95585ae8-c7e0-415d-b239-3c078e4d0a7e","Type":"ContainerStarted","Data":"782e545fb1f32b809f690585710f6b36072eddbf5a37bb9b577aae0fe02d7cdb"} Feb 17 18:02:51 crc kubenswrapper[4680]: I0217 18:02:51.503260 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-dgb89" podStartSLOduration=9.503230473 podStartE2EDuration="9.503230473s" podCreationTimestamp="2026-02-17 18:02:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:02:51.495549083 +0000 UTC m=+1101.046447762" watchObservedRunningTime="2026-02-17 18:02:51.503230473 +0000 UTC m=+1101.054129152" Feb 17 18:02:52 crc kubenswrapper[4680]: I0217 18:02:52.515468 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6978d4c7cb-4k7vb" event={"ID":"91980b55-1251-4404-a22d-1572dd91ec8a","Type":"ContainerStarted","Data":"f3a8f4a5fdca81edfa583020c0921b198bd2adb68d7e11153d1ee645737ca8c6"} Feb 17 18:02:52 crc kubenswrapper[4680]: I0217 18:02:52.522828 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"95585ae8-c7e0-415d-b239-3c078e4d0a7e","Type":"ContainerStarted","Data":"67d8c6a37e4e05cc774793d5c9349a72df97eb750ef23b13f24875fcbfbf089a"} Feb 17 18:02:52 crc 
kubenswrapper[4680]: I0217 18:02:52.526281 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7a9d1eec-3c1b-405d-91d8-d76e5233e223","Type":"ContainerStarted","Data":"ed5ffd776e9ece6e132f39e2a1dcb7f02dc73a7b1dececc030f9aaa6aa149e4f"} Feb 17 18:02:52 crc kubenswrapper[4680]: I0217 18:02:52.529836 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-c489b55b9-9nhw5" podUID="6e2a20ce-72be-4602-99f0-c1fb77919798" containerName="horizon-log" containerID="cri-o://3475ac570d5fd7e9f2211e2e16bec30c65494169117464102e52c577e380646f" gracePeriod=30 Feb 17 18:02:52 crc kubenswrapper[4680]: I0217 18:02:52.530153 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c489b55b9-9nhw5" event={"ID":"6e2a20ce-72be-4602-99f0-c1fb77919798","Type":"ContainerStarted","Data":"00719fd5adef0290617f1485cfa427f4dfba8d9b3f5d0851e127076421220cf7"} Feb 17 18:02:52 crc kubenswrapper[4680]: I0217 18:02:52.530236 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-c489b55b9-9nhw5" podUID="6e2a20ce-72be-4602-99f0-c1fb77919798" containerName="horizon" containerID="cri-o://00719fd5adef0290617f1485cfa427f4dfba8d9b3f5d0851e127076421220cf7" gracePeriod=30 Feb 17 18:02:52 crc kubenswrapper[4680]: I0217 18:02:52.539429 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8ccfcccc8-2qwb2" event={"ID":"d7b7455b-130d-4711-9174-16e352b16455","Type":"ContainerStarted","Data":"8b01e5dcc6cc0527ee746e19a1edb564cad531d2e71631a44d11071138f3ae62"} Feb 17 18:02:52 crc kubenswrapper[4680]: I0217 18:02:52.557414 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6978d4c7cb-4k7vb" podStartSLOduration=34.557384321 podStartE2EDuration="34.557384321s" podCreationTimestamp="2026-02-17 18:02:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:02:52.545597241 +0000 UTC m=+1102.096495920" watchObservedRunningTime="2026-02-17 18:02:52.557384321 +0000 UTC m=+1102.108283010" Feb 17 18:02:52 crc kubenswrapper[4680]: I0217 18:02:52.607688 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-8ccfcccc8-2qwb2" podStartSLOduration=34.607665289 podStartE2EDuration="34.607665289s" podCreationTimestamp="2026-02-17 18:02:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:02:52.583218172 +0000 UTC m=+1102.134116851" watchObservedRunningTime="2026-02-17 18:02:52.607665289 +0000 UTC m=+1102.158563968" Feb 17 18:02:53 crc kubenswrapper[4680]: I0217 18:02:53.580064 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-c489b55b9-9nhw5" podStartSLOduration=8.218365852 podStartE2EDuration="40.580036412s" podCreationTimestamp="2026-02-17 18:02:13 +0000 UTC" firstStartedPulling="2026-02-17 18:02:14.620361472 +0000 UTC m=+1064.171260151" lastFinishedPulling="2026-02-17 18:02:46.982032032 +0000 UTC m=+1096.532930711" observedRunningTime="2026-02-17 18:02:52.664379369 +0000 UTC m=+1102.215278058" watchObservedRunningTime="2026-02-17 18:02:53.580036412 +0000 UTC m=+1103.130935081" Feb 17 18:02:53 crc kubenswrapper[4680]: I0217 18:02:53.609333 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-default-external-api-0" podStartSLOduration=9.60929011 podStartE2EDuration="9.60929011s" podCreationTimestamp="2026-02-17 18:02:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:02:53.578823483 +0000 UTC m=+1103.129722162" watchObservedRunningTime="2026-02-17 18:02:53.60929011 +0000 UTC m=+1103.160188789" Feb 17 18:02:53 crc kubenswrapper[4680]: I0217 18:02:53.615740 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=11.615712401 podStartE2EDuration="11.615712401s" podCreationTimestamp="2026-02-17 18:02:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:02:53.606376779 +0000 UTC m=+1103.157275458" watchObservedRunningTime="2026-02-17 18:02:53.615712401 +0000 UTC m=+1103.166611070" Feb 17 18:02:53 crc kubenswrapper[4680]: I0217 18:02:53.726002 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-c489b55b9-9nhw5" Feb 17 18:02:54 crc kubenswrapper[4680]: I0217 18:02:54.827930 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 18:02:54 crc kubenswrapper[4680]: I0217 18:02:54.828006 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 18:02:54 crc kubenswrapper[4680]: I0217 18:02:54.885808 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 18:02:54 crc kubenswrapper[4680]: I0217 18:02:54.889938 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 18:02:55 crc kubenswrapper[4680]: I0217 18:02:55.572207 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 18:02:55 crc kubenswrapper[4680]: I0217 18:02:55.572339 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 18:02:59 crc kubenswrapper[4680]: I0217 18:02:59.200766 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:02:59 crc kubenswrapper[4680]: I0217 18:02:59.201434 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:02:59 crc kubenswrapper[4680]: I0217 18:02:59.415991 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6978d4c7cb-4k7vb" Feb 17 18:02:59 crc kubenswrapper[4680]: I0217 18:02:59.416058 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6978d4c7cb-4k7vb" Feb 17 18:03:01 crc kubenswrapper[4680]: I0217 18:03:01.657846 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f5f74b1-9667-421c-94e6-484208c36f55","Type":"ContainerStarted","Data":"17d510f98f9ce9a63a955c0961a6b17e0c210d04faf62622027c8f851a32bbab"} Feb 17 18:03:01 crc kubenswrapper[4680]: I0217 18:03:01.667009 4680 generic.go:334] "Generic (PLEG): container finished" podID="bbcf2418-ecf2-4c69-9129-88ebe5cab8a2" containerID="78b364685af07b8c110b1575898f77e9639129c3c8d218360fa60b41101601eb" exitCode=0 Feb 17 18:03:01 crc 
kubenswrapper[4680]: I0217 18:03:01.667106 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dgb89" event={"ID":"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2","Type":"ContainerDied","Data":"78b364685af07b8c110b1575898f77e9639129c3c8d218360fa60b41101601eb"} Feb 17 18:03:01 crc kubenswrapper[4680]: I0217 18:03:01.669537 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-477hc" event={"ID":"a49d620c-c0c2-41cb-b458-cb67351334ad","Type":"ContainerStarted","Data":"f115b6f2712360fedf2655bc1a7af5bb3d800f96bce75a80dcb9d60c9dc3edf5"} Feb 17 18:03:01 crc kubenswrapper[4680]: I0217 18:03:01.715784 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-477hc" podStartSLOduration=4.29006398 podStartE2EDuration="53.715765415s" podCreationTimestamp="2026-02-17 18:02:08 +0000 UTC" firstStartedPulling="2026-02-17 18:02:11.14154156 +0000 UTC m=+1060.692440239" lastFinishedPulling="2026-02-17 18:03:00.567242995 +0000 UTC m=+1110.118141674" observedRunningTime="2026-02-17 18:03:01.708675551 +0000 UTC m=+1111.259574240" watchObservedRunningTime="2026-02-17 18:03:01.715765415 +0000 UTC m=+1111.266664094" Feb 17 18:03:02 crc kubenswrapper[4680]: I0217 18:03:02.785117 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 18:03:02 crc kubenswrapper[4680]: I0217 18:03:02.785452 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 18:03:02 crc kubenswrapper[4680]: I0217 18:03:02.855196 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 18:03:02 crc kubenswrapper[4680]: I0217 18:03:02.926039 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.455905 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dgb89" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.543460 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-config-data\") pod \"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2\" (UID: \"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2\") " Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.543559 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz6ll\" (UniqueName: \"kubernetes.io/projected/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-kube-api-access-pz6ll\") pod \"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2\" (UID: \"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2\") " Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.543599 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-fernet-keys\") pod \"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2\" (UID: \"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2\") " Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.543652 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-combined-ca-bundle\") pod \"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2\" (UID: \"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2\") " Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.543685 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-credential-keys\") pod \"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2\" (UID: \"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2\") " Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.543813 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-scripts\") pod \"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2\" (UID: \"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2\") " Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.554634 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "bbcf2418-ecf2-4c69-9129-88ebe5cab8a2" (UID: "bbcf2418-ecf2-4c69-9129-88ebe5cab8a2"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.555471 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-scripts" (OuterVolumeSpecName: "scripts") pod "bbcf2418-ecf2-4c69-9129-88ebe5cab8a2" (UID: "bbcf2418-ecf2-4c69-9129-88ebe5cab8a2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.556442 4680 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.556493 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.558165 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "bbcf2418-ecf2-4c69-9129-88ebe5cab8a2" (UID: "bbcf2418-ecf2-4c69-9129-88ebe5cab8a2"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.573756 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-kube-api-access-pz6ll" (OuterVolumeSpecName: "kube-api-access-pz6ll") pod "bbcf2418-ecf2-4c69-9129-88ebe5cab8a2" (UID: "bbcf2418-ecf2-4c69-9129-88ebe5cab8a2"). InnerVolumeSpecName "kube-api-access-pz6ll". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.584672 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbcf2418-ecf2-4c69-9129-88ebe5cab8a2" (UID: "bbcf2418-ecf2-4c69-9129-88ebe5cab8a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.646496 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-config-data" (OuterVolumeSpecName: "config-data") pod "bbcf2418-ecf2-4c69-9129-88ebe5cab8a2" (UID: "bbcf2418-ecf2-4c69-9129-88ebe5cab8a2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.657935 4680 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.657991 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.658007 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz6ll\" (UniqueName: \"kubernetes.io/projected/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-kube-api-access-pz6ll\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.658022 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.806781 4680 generic.go:334] "Generic (PLEG): container finished" podID="b2c53952-8df2-4f30-ae29-5778822cae38" containerID="63c18c1503d5044c8df349e56802f1c81e24b1744c0c3f36b09a08879a1536f8" exitCode=0 Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.806866 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-76rb8" event={"ID":"b2c53952-8df2-4f30-ae29-5778822cae38","Type":"ContainerDied","Data":"63c18c1503d5044c8df349e56802f1c81e24b1744c0c3f36b09a08879a1536f8"} Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.809405 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dgb89" event={"ID":"bbcf2418-ecf2-4c69-9129-88ebe5cab8a2","Type":"ContainerDied","Data":"bbee8d8e2431784f3ab6f97447d8a7477b7efac599f30077e5b8dc51c9c10c38"} Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.809437 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbee8d8e2431784f3ab6f97447d8a7477b7efac599f30077e5b8dc51c9c10c38" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.809473 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dgb89" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.809793 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.809817 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.891721 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-854d5d774d-kbx8n"] Feb 17 18:03:03 crc kubenswrapper[4680]: E0217 18:03:03.894330 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbcf2418-ecf2-4c69-9129-88ebe5cab8a2" containerName="keystone-bootstrap" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.894354 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbcf2418-ecf2-4c69-9129-88ebe5cab8a2" containerName="keystone-bootstrap" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.894583 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbcf2418-ecf2-4c69-9129-88ebe5cab8a2" containerName="keystone-bootstrap" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.895263 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-854d5d774d-kbx8n" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.904653 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-854d5d774d-kbx8n"] Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.906708 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.907043 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.907344 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.907354 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fww75" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.907567 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.907609 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.962918 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7370e5a4-158b-44db-8b30-e9201681787a-config-data\") pod \"keystone-854d5d774d-kbx8n\" (UID: \"7370e5a4-158b-44db-8b30-e9201681787a\") " pod="openstack/keystone-854d5d774d-kbx8n" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.962963 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7370e5a4-158b-44db-8b30-e9201681787a-internal-tls-certs\") pod \"keystone-854d5d774d-kbx8n\" (UID: \"7370e5a4-158b-44db-8b30-e9201681787a\") " pod="openstack/keystone-854d5d774d-kbx8n" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.963081 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/7370e5a4-158b-44db-8b30-e9201681787a-credential-keys\") pod \"keystone-854d5d774d-kbx8n\" (UID: \"7370e5a4-158b-44db-8b30-e9201681787a\") " pod="openstack/keystone-854d5d774d-kbx8n" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.963570 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7370e5a4-158b-44db-8b30-e9201681787a-scripts\") pod \"keystone-854d5d774d-kbx8n\" (UID: \"7370e5a4-158b-44db-8b30-e9201681787a\") " pod="openstack/keystone-854d5d774d-kbx8n" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.964233 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7370e5a4-158b-44db-8b30-e9201681787a-fernet-keys\") pod \"keystone-854d5d774d-kbx8n\" (UID: \"7370e5a4-158b-44db-8b30-e9201681787a\") " pod="openstack/keystone-854d5d774d-kbx8n" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.964300 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s2mv\" (UniqueName: \"kubernetes.io/projected/7370e5a4-158b-44db-8b30-e9201681787a-kube-api-access-9s2mv\") pod \"keystone-854d5d774d-kbx8n\" (UID: \"7370e5a4-158b-44db-8b30-e9201681787a\") " pod="openstack/keystone-854d5d774d-kbx8n" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.964382 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7370e5a4-158b-44db-8b30-e9201681787a-public-tls-certs\") pod \"keystone-854d5d774d-kbx8n\" (UID: \"7370e5a4-158b-44db-8b30-e9201681787a\") " pod="openstack/keystone-854d5d774d-kbx8n" Feb 17 18:03:03 crc kubenswrapper[4680]: I0217 18:03:03.964525 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7370e5a4-158b-44db-8b30-e9201681787a-combined-ca-bundle\") pod \"keystone-854d5d774d-kbx8n\" (UID: \"7370e5a4-158b-44db-8b30-e9201681787a\") " pod="openstack/keystone-854d5d774d-kbx8n" Feb 17 18:03:04 crc kubenswrapper[4680]: I0217 18:03:04.069137 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7370e5a4-158b-44db-8b30-e9201681787a-credential-keys\") pod \"keystone-854d5d774d-kbx8n\" (UID: \"7370e5a4-158b-44db-8b30-e9201681787a\") " pod="openstack/keystone-854d5d774d-kbx8n" Feb 17 18:03:04 crc kubenswrapper[4680]: I0217 18:03:04.071098 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7370e5a4-158b-44db-8b30-e9201681787a-scripts\") pod \"keystone-854d5d774d-kbx8n\" (UID: \"7370e5a4-158b-44db-8b30-e9201681787a\") " pod="openstack/keystone-854d5d774d-kbx8n" Feb 17 18:03:04 crc kubenswrapper[4680]: I0217 18:03:04.071187 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7370e5a4-158b-44db-8b30-e9201681787a-fernet-keys\") pod \"keystone-854d5d774d-kbx8n\" (UID: \"7370e5a4-158b-44db-8b30-e9201681787a\") " pod="openstack/keystone-854d5d774d-kbx8n" Feb 17 18:03:04 crc kubenswrapper[4680]: I0217 18:03:04.071215 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s2mv\" (UniqueName: 
\"kubernetes.io/projected/7370e5a4-158b-44db-8b30-e9201681787a-kube-api-access-9s2mv\") pod \"keystone-854d5d774d-kbx8n\" (UID: \"7370e5a4-158b-44db-8b30-e9201681787a\") " pod="openstack/keystone-854d5d774d-kbx8n" Feb 17 18:03:04 crc kubenswrapper[4680]: I0217 18:03:04.071282 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7370e5a4-158b-44db-8b30-e9201681787a-public-tls-certs\") pod \"keystone-854d5d774d-kbx8n\" (UID: \"7370e5a4-158b-44db-8b30-e9201681787a\") " pod="openstack/keystone-854d5d774d-kbx8n" Feb 17 18:03:04 crc kubenswrapper[4680]: I0217 18:03:04.071510 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7370e5a4-158b-44db-8b30-e9201681787a-combined-ca-bundle\") pod \"keystone-854d5d774d-kbx8n\" (UID: \"7370e5a4-158b-44db-8b30-e9201681787a\") " pod="openstack/keystone-854d5d774d-kbx8n" Feb 17 18:03:04 crc kubenswrapper[4680]: I0217 18:03:04.071559 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7370e5a4-158b-44db-8b30-e9201681787a-config-data\") pod \"keystone-854d5d774d-kbx8n\" (UID: \"7370e5a4-158b-44db-8b30-e9201681787a\") " pod="openstack/keystone-854d5d774d-kbx8n" Feb 17 18:03:04 crc kubenswrapper[4680]: I0217 18:03:04.071602 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7370e5a4-158b-44db-8b30-e9201681787a-internal-tls-certs\") pod \"keystone-854d5d774d-kbx8n\" (UID: \"7370e5a4-158b-44db-8b30-e9201681787a\") " pod="openstack/keystone-854d5d774d-kbx8n" Feb 17 18:03:04 crc kubenswrapper[4680]: I0217 18:03:04.081909 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7370e5a4-158b-44db-8b30-e9201681787a-internal-tls-certs\") pod \"keystone-854d5d774d-kbx8n\" (UID: \"7370e5a4-158b-44db-8b30-e9201681787a\") " pod="openstack/keystone-854d5d774d-kbx8n" Feb 17 18:03:04 crc kubenswrapper[4680]: I0217 18:03:04.084092 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7370e5a4-158b-44db-8b30-e9201681787a-scripts\") pod \"keystone-854d5d774d-kbx8n\" (UID: \"7370e5a4-158b-44db-8b30-e9201681787a\") " pod="openstack/keystone-854d5d774d-kbx8n" Feb 17 18:03:04 crc kubenswrapper[4680]: I0217 18:03:04.086071 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7370e5a4-158b-44db-8b30-e9201681787a-fernet-keys\") pod \"keystone-854d5d774d-kbx8n\" (UID: \"7370e5a4-158b-44db-8b30-e9201681787a\") " pod="openstack/keystone-854d5d774d-kbx8n" Feb 17 18:03:04 crc kubenswrapper[4680]: I0217 18:03:04.087619 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7370e5a4-158b-44db-8b30-e9201681787a-credential-keys\") pod \"keystone-854d5d774d-kbx8n\" (UID: \"7370e5a4-158b-44db-8b30-e9201681787a\") " pod="openstack/keystone-854d5d774d-kbx8n" Feb 17 18:03:04 crc kubenswrapper[4680]: I0217 18:03:04.091497 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7370e5a4-158b-44db-8b30-e9201681787a-public-tls-certs\") pod \"keystone-854d5d774d-kbx8n\" (UID: 
\"7370e5a4-158b-44db-8b30-e9201681787a\") " pod="openstack/keystone-854d5d774d-kbx8n" Feb 17 18:03:04 crc kubenswrapper[4680]: I0217 18:03:04.094549 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7370e5a4-158b-44db-8b30-e9201681787a-config-data\") pod \"keystone-854d5d774d-kbx8n\" (UID: \"7370e5a4-158b-44db-8b30-e9201681787a\") " pod="openstack/keystone-854d5d774d-kbx8n" Feb 17 18:03:04 crc kubenswrapper[4680]: I0217 18:03:04.104160 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s2mv\" (UniqueName: \"kubernetes.io/projected/7370e5a4-158b-44db-8b30-e9201681787a-kube-api-access-9s2mv\") pod \"keystone-854d5d774d-kbx8n\" (UID: \"7370e5a4-158b-44db-8b30-e9201681787a\") " pod="openstack/keystone-854d5d774d-kbx8n" Feb 17 18:03:04 crc kubenswrapper[4680]: I0217 18:03:04.106033 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7370e5a4-158b-44db-8b30-e9201681787a-combined-ca-bundle\") pod \"keystone-854d5d774d-kbx8n\" (UID: \"7370e5a4-158b-44db-8b30-e9201681787a\") " pod="openstack/keystone-854d5d774d-kbx8n" Feb 17 18:03:04 crc kubenswrapper[4680]: I0217 18:03:04.231615 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-854d5d774d-kbx8n" Feb 17 18:03:04 crc kubenswrapper[4680]: I0217 18:03:04.936283 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-854d5d774d-kbx8n"] Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.186726 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-76rb8" Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.338674 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc2t5\" (UniqueName: \"kubernetes.io/projected/b2c53952-8df2-4f30-ae29-5778822cae38-kube-api-access-hc2t5\") pod \"b2c53952-8df2-4f30-ae29-5778822cae38\" (UID: \"b2c53952-8df2-4f30-ae29-5778822cae38\") " Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.338754 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2c53952-8df2-4f30-ae29-5778822cae38-config-data\") pod \"b2c53952-8df2-4f30-ae29-5778822cae38\" (UID: \"b2c53952-8df2-4f30-ae29-5778822cae38\") " Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.338797 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2c53952-8df2-4f30-ae29-5778822cae38-logs\") pod \"b2c53952-8df2-4f30-ae29-5778822cae38\" (UID: \"b2c53952-8df2-4f30-ae29-5778822cae38\") " Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.338835 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2c53952-8df2-4f30-ae29-5778822cae38-combined-ca-bundle\") pod \"b2c53952-8df2-4f30-ae29-5778822cae38\" (UID: \"b2c53952-8df2-4f30-ae29-5778822cae38\") " Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.338910 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2c53952-8df2-4f30-ae29-5778822cae38-scripts\") pod \"b2c53952-8df2-4f30-ae29-5778822cae38\" (UID: \"b2c53952-8df2-4f30-ae29-5778822cae38\") " Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.339760 
4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2c53952-8df2-4f30-ae29-5778822cae38-logs" (OuterVolumeSpecName: "logs") pod "b2c53952-8df2-4f30-ae29-5778822cae38" (UID: "b2c53952-8df2-4f30-ae29-5778822cae38"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.346511 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2c53952-8df2-4f30-ae29-5778822cae38-scripts" (OuterVolumeSpecName: "scripts") pod "b2c53952-8df2-4f30-ae29-5778822cae38" (UID: "b2c53952-8df2-4f30-ae29-5778822cae38"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.360297 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2c53952-8df2-4f30-ae29-5778822cae38-kube-api-access-hc2t5" (OuterVolumeSpecName: "kube-api-access-hc2t5") pod "b2c53952-8df2-4f30-ae29-5778822cae38" (UID: "b2c53952-8df2-4f30-ae29-5778822cae38"). InnerVolumeSpecName "kube-api-access-hc2t5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.394289 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2c53952-8df2-4f30-ae29-5778822cae38-config-data" (OuterVolumeSpecName: "config-data") pod "b2c53952-8df2-4f30-ae29-5778822cae38" (UID: "b2c53952-8df2-4f30-ae29-5778822cae38"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.432174 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2c53952-8df2-4f30-ae29-5778822cae38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2c53952-8df2-4f30-ae29-5778822cae38" (UID: "b2c53952-8df2-4f30-ae29-5778822cae38"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.443081 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hc2t5\" (UniqueName: \"kubernetes.io/projected/b2c53952-8df2-4f30-ae29-5778822cae38-kube-api-access-hc2t5\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.443147 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2c53952-8df2-4f30-ae29-5778822cae38-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.443244 4680 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2c53952-8df2-4f30-ae29-5778822cae38-logs\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.443258 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2c53952-8df2-4f30-ae29-5778822cae38-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.443269 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2c53952-8df2-4f30-ae29-5778822cae38-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.589269 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.589348 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.589400 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.590210 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7648edfdb05be717c37ab1637f452b4f2bc8bd71444b3bbf2ede9558e7be04df"} pod="openshift-machine-config-operator/machine-config-daemon-rfw56" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.590982 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" containerID="cri-o://7648edfdb05be717c37ab1637f452b4f2bc8bd71444b3bbf2ede9558e7be04df" gracePeriod=600 Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.853143 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-76rb8" event={"ID":"b2c53952-8df2-4f30-ae29-5778822cae38","Type":"ContainerDied","Data":"a82aa3f2f83a7e94ea65fbc513557beaa7a2052d473d46c663ab4645ee9b734f"} Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.853460 4680 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="a82aa3f2f83a7e94ea65fbc513557beaa7a2052d473d46c663ab4645ee9b734f" Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.853376 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-76rb8" Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.855820 4680 generic.go:334] "Generic (PLEG): container finished" podID="8dd08058-016e-4607-8be6-40463674c2fe" containerID="7648edfdb05be717c37ab1637f452b4f2bc8bd71444b3bbf2ede9558e7be04df" exitCode=0 Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.855866 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerDied","Data":"7648edfdb05be717c37ab1637f452b4f2bc8bd71444b3bbf2ede9558e7be04df"} Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.855914 4680 scope.go:117] "RemoveContainer" containerID="44b711f4401b0e5c2c2810ad8495da08e3e147502d358d88f7938a52612ea1ab" Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.871099 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9458w" event={"ID":"f869168f-8063-46a6-b574-d8be569ac733","Type":"ContainerStarted","Data":"402a0262b690fade24798da881d746d742311d70295f6b4d9909081f63083658"} Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.879047 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-854d5d774d-kbx8n" event={"ID":"7370e5a4-158b-44db-8b30-e9201681787a","Type":"ContainerStarted","Data":"e05b55ffdb8b13f2b0bf7bec45fbbef7df952a0abac46898460ceda93bf1a57c"} Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.879085 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-854d5d774d-kbx8n" event={"ID":"7370e5a4-158b-44db-8b30-e9201681787a","Type":"ContainerStarted","Data":"c1153d8818307b1a548db2d100e78f45d8cb6a4b2b56d8933b621238daf6add1"} Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.879277 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-854d5d774d-kbx8n" Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.924506 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-9458w" podStartSLOduration=5.557152851 podStartE2EDuration="58.92448475s" podCreationTimestamp="2026-02-17 18:02:07 +0000 UTC" firstStartedPulling="2026-02-17 18:02:10.394259761 +0000 UTC m=+1059.945158440" lastFinishedPulling="2026-02-17 18:03:03.76159165 +0000 UTC m=+1113.312490339" observedRunningTime="2026-02-17 18:03:05.892740824 +0000 UTC m=+1115.443639503" watchObservedRunningTime="2026-02-17 18:03:05.92448475 +0000 UTC m=+1115.475383429" Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.936064 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-854d5d774d-kbx8n" podStartSLOduration=2.936046372 podStartE2EDuration="2.936046372s" podCreationTimestamp="2026-02-17 18:03:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:03:05.920788564 +0000 UTC m=+1115.471687243" watchObservedRunningTime="2026-02-17 18:03:05.936046372 +0000 UTC m=+1115.486945051" Feb 17 18:03:05 crc kubenswrapper[4680]: I0217 18:03:05.939909 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 17 
18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.064729 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6c7f4dffcd-6k6kn"] Feb 17 18:03:06 crc kubenswrapper[4680]: E0217 18:03:06.065610 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2c53952-8df2-4f30-ae29-5778822cae38" containerName="placement-db-sync" Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.065630 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2c53952-8df2-4f30-ae29-5778822cae38" containerName="placement-db-sync" Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.065833 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2c53952-8df2-4f30-ae29-5778822cae38" containerName="placement-db-sync" Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.067195 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6c7f4dffcd-6k6kn" Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.076617 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.076708 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.076851 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4jzcp" Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.076973 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.076991 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.100942 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6c7f4dffcd-6k6kn"] Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.154816 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48b0173d-34ae-465b-8119-b84227984ccd-combined-ca-bundle\") pod \"placement-6c7f4dffcd-6k6kn\" (UID: \"48b0173d-34ae-465b-8119-b84227984ccd\") " pod="openstack/placement-6c7f4dffcd-6k6kn" Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.154864 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48b0173d-34ae-465b-8119-b84227984ccd-scripts\") pod \"placement-6c7f4dffcd-6k6kn\" (UID: \"48b0173d-34ae-465b-8119-b84227984ccd\") " pod="openstack/placement-6c7f4dffcd-6k6kn" Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.154899 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48b0173d-34ae-465b-8119-b84227984ccd-public-tls-certs\") pod \"placement-6c7f4dffcd-6k6kn\" (UID: \"48b0173d-34ae-465b-8119-b84227984ccd\") " pod="openstack/placement-6c7f4dffcd-6k6kn" Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.154945 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48b0173d-34ae-465b-8119-b84227984ccd-logs\") pod \"placement-6c7f4dffcd-6k6kn\" (UID: \"48b0173d-34ae-465b-8119-b84227984ccd\") " pod="openstack/placement-6c7f4dffcd-6k6kn" Feb 17 18:03:06 
crc kubenswrapper[4680]: I0217 18:03:06.155036 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48b0173d-34ae-465b-8119-b84227984ccd-config-data\") pod \"placement-6c7f4dffcd-6k6kn\" (UID: \"48b0173d-34ae-465b-8119-b84227984ccd\") " pod="openstack/placement-6c7f4dffcd-6k6kn" Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.155092 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwdwv\" (UniqueName: \"kubernetes.io/projected/48b0173d-34ae-465b-8119-b84227984ccd-kube-api-access-xwdwv\") pod \"placement-6c7f4dffcd-6k6kn\" (UID: \"48b0173d-34ae-465b-8119-b84227984ccd\") " pod="openstack/placement-6c7f4dffcd-6k6kn" Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.155172 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48b0173d-34ae-465b-8119-b84227984ccd-internal-tls-certs\") pod \"placement-6c7f4dffcd-6k6kn\" (UID: \"48b0173d-34ae-465b-8119-b84227984ccd\") " pod="openstack/placement-6c7f4dffcd-6k6kn" Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.256696 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwdwv\" (UniqueName: \"kubernetes.io/projected/48b0173d-34ae-465b-8119-b84227984ccd-kube-api-access-xwdwv\") pod \"placement-6c7f4dffcd-6k6kn\" (UID: \"48b0173d-34ae-465b-8119-b84227984ccd\") " pod="openstack/placement-6c7f4dffcd-6k6kn" Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.256835 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48b0173d-34ae-465b-8119-b84227984ccd-internal-tls-certs\") pod \"placement-6c7f4dffcd-6k6kn\" (UID: \"48b0173d-34ae-465b-8119-b84227984ccd\") " pod="openstack/placement-6c7f4dffcd-6k6kn" Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.256904 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48b0173d-34ae-465b-8119-b84227984ccd-combined-ca-bundle\") pod \"placement-6c7f4dffcd-6k6kn\" (UID: \"48b0173d-34ae-465b-8119-b84227984ccd\") " pod="openstack/placement-6c7f4dffcd-6k6kn" Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.256930 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48b0173d-34ae-465b-8119-b84227984ccd-scripts\") pod \"placement-6c7f4dffcd-6k6kn\" (UID: \"48b0173d-34ae-465b-8119-b84227984ccd\") " pod="openstack/placement-6c7f4dffcd-6k6kn" Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.256959 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48b0173d-34ae-465b-8119-b84227984ccd-public-tls-certs\") pod \"placement-6c7f4dffcd-6k6kn\" (UID: \"48b0173d-34ae-465b-8119-b84227984ccd\") " pod="openstack/placement-6c7f4dffcd-6k6kn" Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.257033 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48b0173d-34ae-465b-8119-b84227984ccd-logs\") pod \"placement-6c7f4dffcd-6k6kn\" (UID: \"48b0173d-34ae-465b-8119-b84227984ccd\") " pod="openstack/placement-6c7f4dffcd-6k6kn" Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 
18:03:06.257147 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48b0173d-34ae-465b-8119-b84227984ccd-config-data\") pod \"placement-6c7f4dffcd-6k6kn\" (UID: \"48b0173d-34ae-465b-8119-b84227984ccd\") " pod="openstack/placement-6c7f4dffcd-6k6kn" Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.257678 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48b0173d-34ae-465b-8119-b84227984ccd-logs\") pod \"placement-6c7f4dffcd-6k6kn\" (UID: \"48b0173d-34ae-465b-8119-b84227984ccd\") " pod="openstack/placement-6c7f4dffcd-6k6kn" Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.265473 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48b0173d-34ae-465b-8119-b84227984ccd-scripts\") pod \"placement-6c7f4dffcd-6k6kn\" (UID: \"48b0173d-34ae-465b-8119-b84227984ccd\") " pod="openstack/placement-6c7f4dffcd-6k6kn" Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.266513 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48b0173d-34ae-465b-8119-b84227984ccd-config-data\") pod \"placement-6c7f4dffcd-6k6kn\" (UID: \"48b0173d-34ae-465b-8119-b84227984ccd\") " pod="openstack/placement-6c7f4dffcd-6k6kn" Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.267901 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48b0173d-34ae-465b-8119-b84227984ccd-internal-tls-certs\") pod \"placement-6c7f4dffcd-6k6kn\" (UID: \"48b0173d-34ae-465b-8119-b84227984ccd\") " pod="openstack/placement-6c7f4dffcd-6k6kn" Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.268294 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48b0173d-34ae-465b-8119-b84227984ccd-public-tls-certs\") pod \"placement-6c7f4dffcd-6k6kn\" (UID: \"48b0173d-34ae-465b-8119-b84227984ccd\") " pod="openstack/placement-6c7f4dffcd-6k6kn" Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.284294 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48b0173d-34ae-465b-8119-b84227984ccd-combined-ca-bundle\") pod \"placement-6c7f4dffcd-6k6kn\" (UID: \"48b0173d-34ae-465b-8119-b84227984ccd\") " pod="openstack/placement-6c7f4dffcd-6k6kn" Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.295559 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwdwv\" (UniqueName: \"kubernetes.io/projected/48b0173d-34ae-465b-8119-b84227984ccd-kube-api-access-xwdwv\") pod \"placement-6c7f4dffcd-6k6kn\" (UID: \"48b0173d-34ae-465b-8119-b84227984ccd\") " pod="openstack/placement-6c7f4dffcd-6k6kn" Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.390735 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6c7f4dffcd-6k6kn" Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.795612 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6c7f4dffcd-6k6kn"] Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.945741 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c7f4dffcd-6k6kn" event={"ID":"48b0173d-34ae-465b-8119-b84227984ccd","Type":"ContainerStarted","Data":"88ca92e76220ff73840606c630f997765ca5adb62c7ff82c7a3530d63e758e91"} Feb 17 18:03:06 crc kubenswrapper[4680]: I0217 18:03:06.964133 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerStarted","Data":"ad1d4ed6f92a7e4627e28b3747e3d87e3433b7897d128c386001f1954f052c13"} Feb 17 18:03:07 crc kubenswrapper[4680]: I0217 18:03:07.975369 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c7f4dffcd-6k6kn" event={"ID":"48b0173d-34ae-465b-8119-b84227984ccd","Type":"ContainerStarted","Data":"f544d578597d8256e0fdadc14d31e6490e5dc1abe34ff93adeecef128eca59b7"} Feb 17 18:03:08 crc kubenswrapper[4680]: I0217 18:03:08.987709 4680 generic.go:334] "Generic (PLEG): container finished" podID="a49d620c-c0c2-41cb-b458-cb67351334ad" containerID="f115b6f2712360fedf2655bc1a7af5bb3d800f96bce75a80dcb9d60c9dc3edf5" exitCode=0 Feb 17 18:03:08 crc kubenswrapper[4680]: I0217 18:03:08.987733 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-477hc" event={"ID":"a49d620c-c0c2-41cb-b458-cb67351334ad","Type":"ContainerDied","Data":"f115b6f2712360fedf2655bc1a7af5bb3d800f96bce75a80dcb9d60c9dc3edf5"} Feb 17 18:03:09 crc kubenswrapper[4680]: I0217 18:03:09.202554 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8ccfcccc8-2qwb2" podUID="d7b7455b-130d-4711-9174-16e352b16455" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Feb 17 18:03:09 crc kubenswrapper[4680]: I0217 18:03:09.417409 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6978d4c7cb-4k7vb" podUID="91980b55-1251-4404-a22d-1572dd91ec8a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 17 18:03:09 crc kubenswrapper[4680]: I0217 18:03:09.519395 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 18:03:09 crc kubenswrapper[4680]: I0217 18:03:09.519503 4680 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 18:03:09 crc kubenswrapper[4680]: I0217 18:03:09.542353 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 18:03:10 crc kubenswrapper[4680]: I0217 18:03:10.040163 4680 generic.go:334] "Generic (PLEG): container finished" podID="dce4bc8a-e4f7-4716-abc6-cb70c238f4cc" containerID="6f25aa11e0c3556a1b98a0ccdf89d777fde4b0e9166e86c6f664f7ceb38d94ae" exitCode=0 Feb 17 18:03:10 crc kubenswrapper[4680]: I0217 18:03:10.041267 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bn2nz" 
event={"ID":"dce4bc8a-e4f7-4716-abc6-cb70c238f4cc","Type":"ContainerDied","Data":"6f25aa11e0c3556a1b98a0ccdf89d777fde4b0e9166e86c6f664f7ceb38d94ae"} Feb 17 18:03:10 crc kubenswrapper[4680]: I0217 18:03:10.793910 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 17 18:03:11 crc kubenswrapper[4680]: I0217 18:03:11.696596 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-477hc" Feb 17 18:03:11 crc kubenswrapper[4680]: I0217 18:03:11.720853 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-bn2nz" Feb 17 18:03:11 crc kubenswrapper[4680]: I0217 18:03:11.786388 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpgx4\" (UniqueName: \"kubernetes.io/projected/a49d620c-c0c2-41cb-b458-cb67351334ad-kube-api-access-wpgx4\") pod \"a49d620c-c0c2-41cb-b458-cb67351334ad\" (UID: \"a49d620c-c0c2-41cb-b458-cb67351334ad\") " Feb 17 18:03:11 crc kubenswrapper[4680]: I0217 18:03:11.786611 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a49d620c-c0c2-41cb-b458-cb67351334ad-db-sync-config-data\") pod \"a49d620c-c0c2-41cb-b458-cb67351334ad\" (UID: \"a49d620c-c0c2-41cb-b458-cb67351334ad\") " Feb 17 18:03:11 crc kubenswrapper[4680]: I0217 18:03:11.786639 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dce4bc8a-e4f7-4716-abc6-cb70c238f4cc-config\") pod \"dce4bc8a-e4f7-4716-abc6-cb70c238f4cc\" (UID: \"dce4bc8a-e4f7-4716-abc6-cb70c238f4cc\") " Feb 17 18:03:11 crc kubenswrapper[4680]: I0217 18:03:11.786659 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a49d620c-c0c2-41cb-b458-cb67351334ad-combined-ca-bundle\") pod \"a49d620c-c0c2-41cb-b458-cb67351334ad\" (UID: \"a49d620c-c0c2-41cb-b458-cb67351334ad\") " Feb 17 18:03:11 crc kubenswrapper[4680]: I0217 18:03:11.786679 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6wx6\" (UniqueName: \"kubernetes.io/projected/dce4bc8a-e4f7-4716-abc6-cb70c238f4cc-kube-api-access-l6wx6\") pod \"dce4bc8a-e4f7-4716-abc6-cb70c238f4cc\" (UID: \"dce4bc8a-e4f7-4716-abc6-cb70c238f4cc\") " Feb 17 18:03:11 crc kubenswrapper[4680]: I0217 18:03:11.788806 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dce4bc8a-e4f7-4716-abc6-cb70c238f4cc-combined-ca-bundle\") pod \"dce4bc8a-e4f7-4716-abc6-cb70c238f4cc\" (UID: \"dce4bc8a-e4f7-4716-abc6-cb70c238f4cc\") " Feb 17 18:03:11 crc kubenswrapper[4680]: I0217 18:03:11.812605 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a49d620c-c0c2-41cb-b458-cb67351334ad-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a49d620c-c0c2-41cb-b458-cb67351334ad" (UID: "a49d620c-c0c2-41cb-b458-cb67351334ad"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:03:11 crc kubenswrapper[4680]: I0217 18:03:11.832962 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a49d620c-c0c2-41cb-b458-cb67351334ad-kube-api-access-wpgx4" (OuterVolumeSpecName: "kube-api-access-wpgx4") pod "a49d620c-c0c2-41cb-b458-cb67351334ad" (UID: "a49d620c-c0c2-41cb-b458-cb67351334ad"). InnerVolumeSpecName "kube-api-access-wpgx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:03:11 crc kubenswrapper[4680]: I0217 18:03:11.835010 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dce4bc8a-e4f7-4716-abc6-cb70c238f4cc-kube-api-access-l6wx6" (OuterVolumeSpecName: "kube-api-access-l6wx6") pod "dce4bc8a-e4f7-4716-abc6-cb70c238f4cc" (UID: "dce4bc8a-e4f7-4716-abc6-cb70c238f4cc"). InnerVolumeSpecName "kube-api-access-l6wx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:03:11 crc kubenswrapper[4680]: I0217 18:03:11.847541 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a49d620c-c0c2-41cb-b458-cb67351334ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a49d620c-c0c2-41cb-b458-cb67351334ad" (UID: "a49d620c-c0c2-41cb-b458-cb67351334ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:03:11 crc kubenswrapper[4680]: I0217 18:03:11.847644 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dce4bc8a-e4f7-4716-abc6-cb70c238f4cc-config" (OuterVolumeSpecName: "config") pod "dce4bc8a-e4f7-4716-abc6-cb70c238f4cc" (UID: "dce4bc8a-e4f7-4716-abc6-cb70c238f4cc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:03:11 crc kubenswrapper[4680]: I0217 18:03:11.871828 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dce4bc8a-e4f7-4716-abc6-cb70c238f4cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dce4bc8a-e4f7-4716-abc6-cb70c238f4cc" (UID: "dce4bc8a-e4f7-4716-abc6-cb70c238f4cc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:03:11 crc kubenswrapper[4680]: I0217 18:03:11.891072 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dce4bc8a-e4f7-4716-abc6-cb70c238f4cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:11 crc kubenswrapper[4680]: I0217 18:03:11.891124 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpgx4\" (UniqueName: \"kubernetes.io/projected/a49d620c-c0c2-41cb-b458-cb67351334ad-kube-api-access-wpgx4\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:11 crc kubenswrapper[4680]: I0217 18:03:11.891140 4680 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a49d620c-c0c2-41cb-b458-cb67351334ad-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:11 crc kubenswrapper[4680]: I0217 18:03:11.891197 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/dce4bc8a-e4f7-4716-abc6-cb70c238f4cc-config\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:11 crc kubenswrapper[4680]: I0217 18:03:11.891209 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a49d620c-c0c2-41cb-b458-cb67351334ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:11 crc kubenswrapper[4680]: I0217 18:03:11.891222 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6wx6\" (UniqueName: \"kubernetes.io/projected/dce4bc8a-e4f7-4716-abc6-cb70c238f4cc-kube-api-access-l6wx6\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.091245 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c7f4dffcd-6k6kn" event={"ID":"48b0173d-34ae-465b-8119-b84227984ccd","Type":"ContainerStarted","Data":"d1f9120d7912f5f8b04f0468dbd68beddb81d420b40940723a6d11d55466a998"} Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.092037 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6c7f4dffcd-6k6kn" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.092431 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6c7f4dffcd-6k6kn" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.104562 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-6c7f4dffcd-6k6kn" podUID="48b0173d-34ae-465b-8119-b84227984ccd" containerName="placement-log" probeResult="failure" output="Get \"https://10.217.0.154:8778/\": dial tcp 10.217.0.154:8778: connect: connection refused" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.104719 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bn2nz" event={"ID":"dce4bc8a-e4f7-4716-abc6-cb70c238f4cc","Type":"ContainerDied","Data":"03b4fb41b2bec6e1fe83bff14ae0278f36ee6aabf7a42ec9ab1372f6acfbea74"} Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.104753 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03b4fb41b2bec6e1fe83bff14ae0278f36ee6aabf7a42ec9ab1372f6acfbea74" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.104837 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-bn2nz" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.115576 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-477hc" event={"ID":"a49d620c-c0c2-41cb-b458-cb67351334ad","Type":"ContainerDied","Data":"85672a985476f03f15f3499fc791e704fa24e518b6c3032214057291ce8f1108"} Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.115630 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85672a985476f03f15f3499fc791e704fa24e518b6c3032214057291ce8f1108" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.115695 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-477hc" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.129652 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6c7f4dffcd-6k6kn" podStartSLOduration=6.129629602 podStartE2EDuration="6.129629602s" podCreationTimestamp="2026-02-17 18:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:03:12.12352195 +0000 UTC m=+1121.674420649" watchObservedRunningTime="2026-02-17 18:03:12.129629602 +0000 UTC m=+1121.680528281" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.353004 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-jt5mk"] Feb 17 18:03:12 crc kubenswrapper[4680]: E0217 18:03:12.356810 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce4bc8a-e4f7-4716-abc6-cb70c238f4cc" containerName="neutron-db-sync" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.356843 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce4bc8a-e4f7-4716-abc6-cb70c238f4cc" containerName="neutron-db-sync" Feb 17 18:03:12 crc kubenswrapper[4680]: E0217 18:03:12.356884 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a49d620c-c0c2-41cb-b458-cb67351334ad" containerName="barbican-db-sync" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.356893 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="a49d620c-c0c2-41cb-b458-cb67351334ad" containerName="barbican-db-sync" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.357259 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="a49d620c-c0c2-41cb-b458-cb67351334ad" containerName="barbican-db-sync" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.357298 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="dce4bc8a-e4f7-4716-abc6-cb70c238f4cc" containerName="neutron-db-sync" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.362946 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-jt5mk" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.395501 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-jt5mk"] Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.403054 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/367c8ec3-d2e9-457a-b057-179da3568946-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-jt5mk\" (UID: \"367c8ec3-d2e9-457a-b057-179da3568946\") " pod="openstack/dnsmasq-dns-55f844cf75-jt5mk" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.403171 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmnq8\" (UniqueName: \"kubernetes.io/projected/367c8ec3-d2e9-457a-b057-179da3568946-kube-api-access-wmnq8\") pod \"dnsmasq-dns-55f844cf75-jt5mk\" (UID: \"367c8ec3-d2e9-457a-b057-179da3568946\") " pod="openstack/dnsmasq-dns-55f844cf75-jt5mk" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.403219 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/367c8ec3-d2e9-457a-b057-179da3568946-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-jt5mk\" (UID: \"367c8ec3-d2e9-457a-b057-179da3568946\") " pod="openstack/dnsmasq-dns-55f844cf75-jt5mk" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.403260 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/367c8ec3-d2e9-457a-b057-179da3568946-dns-svc\") pod \"dnsmasq-dns-55f844cf75-jt5mk\" (UID: \"367c8ec3-d2e9-457a-b057-179da3568946\") " pod="openstack/dnsmasq-dns-55f844cf75-jt5mk" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.403377 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/367c8ec3-d2e9-457a-b057-179da3568946-config\") pod \"dnsmasq-dns-55f844cf75-jt5mk\" (UID: \"367c8ec3-d2e9-457a-b057-179da3568946\") " pod="openstack/dnsmasq-dns-55f844cf75-jt5mk" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.403478 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/367c8ec3-d2e9-457a-b057-179da3568946-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-jt5mk\" (UID: \"367c8ec3-d2e9-457a-b057-179da3568946\") " pod="openstack/dnsmasq-dns-55f844cf75-jt5mk" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.510135 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/367c8ec3-d2e9-457a-b057-179da3568946-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-jt5mk\" (UID: \"367c8ec3-d2e9-457a-b057-179da3568946\") " pod="openstack/dnsmasq-dns-55f844cf75-jt5mk" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.510248 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/367c8ec3-d2e9-457a-b057-179da3568946-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-jt5mk\" (UID: \"367c8ec3-d2e9-457a-b057-179da3568946\") " pod="openstack/dnsmasq-dns-55f844cf75-jt5mk" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.510310 4680 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmnq8\" (UniqueName: \"kubernetes.io/projected/367c8ec3-d2e9-457a-b057-179da3568946-kube-api-access-wmnq8\") pod \"dnsmasq-dns-55f844cf75-jt5mk\" (UID: \"367c8ec3-d2e9-457a-b057-179da3568946\") " pod="openstack/dnsmasq-dns-55f844cf75-jt5mk" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.510357 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/367c8ec3-d2e9-457a-b057-179da3568946-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-jt5mk\" (UID: \"367c8ec3-d2e9-457a-b057-179da3568946\") " pod="openstack/dnsmasq-dns-55f844cf75-jt5mk" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.510395 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/367c8ec3-d2e9-457a-b057-179da3568946-dns-svc\") pod \"dnsmasq-dns-55f844cf75-jt5mk\" (UID: \"367c8ec3-d2e9-457a-b057-179da3568946\") " pod="openstack/dnsmasq-dns-55f844cf75-jt5mk" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.510499 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/367c8ec3-d2e9-457a-b057-179da3568946-config\") pod \"dnsmasq-dns-55f844cf75-jt5mk\" (UID: \"367c8ec3-d2e9-457a-b057-179da3568946\") " pod="openstack/dnsmasq-dns-55f844cf75-jt5mk" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.511704 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/367c8ec3-d2e9-457a-b057-179da3568946-config\") pod \"dnsmasq-dns-55f844cf75-jt5mk\" (UID: \"367c8ec3-d2e9-457a-b057-179da3568946\") " pod="openstack/dnsmasq-dns-55f844cf75-jt5mk" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.514308 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/367c8ec3-d2e9-457a-b057-179da3568946-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-jt5mk\" (UID: \"367c8ec3-d2e9-457a-b057-179da3568946\") " pod="openstack/dnsmasq-dns-55f844cf75-jt5mk" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.514398 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/367c8ec3-d2e9-457a-b057-179da3568946-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-jt5mk\" (UID: \"367c8ec3-d2e9-457a-b057-179da3568946\") " pod="openstack/dnsmasq-dns-55f844cf75-jt5mk" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.527082 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/367c8ec3-d2e9-457a-b057-179da3568946-dns-svc\") pod \"dnsmasq-dns-55f844cf75-jt5mk\" (UID: \"367c8ec3-d2e9-457a-b057-179da3568946\") " pod="openstack/dnsmasq-dns-55f844cf75-jt5mk" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.531267 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/367c8ec3-d2e9-457a-b057-179da3568946-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-jt5mk\" (UID: \"367c8ec3-d2e9-457a-b057-179da3568946\") " pod="openstack/dnsmasq-dns-55f844cf75-jt5mk" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.548457 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmnq8\" (UniqueName: 
\"kubernetes.io/projected/367c8ec3-d2e9-457a-b057-179da3568946-kube-api-access-wmnq8\") pod \"dnsmasq-dns-55f844cf75-jt5mk\" (UID: \"367c8ec3-d2e9-457a-b057-179da3568946\") " pod="openstack/dnsmasq-dns-55f844cf75-jt5mk" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.738930 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7cfbd9cd46-qsg4t"] Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.740395 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7cfbd9cd46-qsg4t" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.746893 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.747163 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-cvxzz" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.747333 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.747476 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.760987 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7cfbd9cd46-qsg4t"] Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.775060 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-jt5mk" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.827830 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be0e9165-8f19-438f-a422-eb09778329be-combined-ca-bundle\") pod \"neutron-7cfbd9cd46-qsg4t\" (UID: \"be0e9165-8f19-438f-a422-eb09778329be\") " pod="openstack/neutron-7cfbd9cd46-qsg4t" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.827879 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/be0e9165-8f19-438f-a422-eb09778329be-config\") pod \"neutron-7cfbd9cd46-qsg4t\" (UID: \"be0e9165-8f19-438f-a422-eb09778329be\") " pod="openstack/neutron-7cfbd9cd46-qsg4t" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.827950 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/be0e9165-8f19-438f-a422-eb09778329be-httpd-config\") pod \"neutron-7cfbd9cd46-qsg4t\" (UID: \"be0e9165-8f19-438f-a422-eb09778329be\") " pod="openstack/neutron-7cfbd9cd46-qsg4t" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.828195 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/be0e9165-8f19-438f-a422-eb09778329be-ovndb-tls-certs\") pod \"neutron-7cfbd9cd46-qsg4t\" (UID: \"be0e9165-8f19-438f-a422-eb09778329be\") " pod="openstack/neutron-7cfbd9cd46-qsg4t" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.828238 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r97k\" (UniqueName: \"kubernetes.io/projected/be0e9165-8f19-438f-a422-eb09778329be-kube-api-access-9r97k\") pod \"neutron-7cfbd9cd46-qsg4t\" (UID: \"be0e9165-8f19-438f-a422-eb09778329be\") " 
pod="openstack/neutron-7cfbd9cd46-qsg4t" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.930029 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r97k\" (UniqueName: \"kubernetes.io/projected/be0e9165-8f19-438f-a422-eb09778329be-kube-api-access-9r97k\") pod \"neutron-7cfbd9cd46-qsg4t\" (UID: \"be0e9165-8f19-438f-a422-eb09778329be\") " pod="openstack/neutron-7cfbd9cd46-qsg4t" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.930237 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be0e9165-8f19-438f-a422-eb09778329be-combined-ca-bundle\") pod \"neutron-7cfbd9cd46-qsg4t\" (UID: \"be0e9165-8f19-438f-a422-eb09778329be\") " pod="openstack/neutron-7cfbd9cd46-qsg4t" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.930257 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/be0e9165-8f19-438f-a422-eb09778329be-config\") pod \"neutron-7cfbd9cd46-qsg4t\" (UID: \"be0e9165-8f19-438f-a422-eb09778329be\") " pod="openstack/neutron-7cfbd9cd46-qsg4t" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.930304 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/be0e9165-8f19-438f-a422-eb09778329be-httpd-config\") pod \"neutron-7cfbd9cd46-qsg4t\" (UID: \"be0e9165-8f19-438f-a422-eb09778329be\") " pod="openstack/neutron-7cfbd9cd46-qsg4t" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.930427 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/be0e9165-8f19-438f-a422-eb09778329be-ovndb-tls-certs\") pod \"neutron-7cfbd9cd46-qsg4t\" (UID: \"be0e9165-8f19-438f-a422-eb09778329be\") " pod="openstack/neutron-7cfbd9cd46-qsg4t" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.943448 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be0e9165-8f19-438f-a422-eb09778329be-combined-ca-bundle\") pod \"neutron-7cfbd9cd46-qsg4t\" (UID: \"be0e9165-8f19-438f-a422-eb09778329be\") " pod="openstack/neutron-7cfbd9cd46-qsg4t" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.945142 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/be0e9165-8f19-438f-a422-eb09778329be-config\") pod \"neutron-7cfbd9cd46-qsg4t\" (UID: \"be0e9165-8f19-438f-a422-eb09778329be\") " pod="openstack/neutron-7cfbd9cd46-qsg4t" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.948971 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/be0e9165-8f19-438f-a422-eb09778329be-ovndb-tls-certs\") pod \"neutron-7cfbd9cd46-qsg4t\" (UID: \"be0e9165-8f19-438f-a422-eb09778329be\") " pod="openstack/neutron-7cfbd9cd46-qsg4t" Feb 17 18:03:12 crc kubenswrapper[4680]: I0217 18:03:12.949408 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/be0e9165-8f19-438f-a422-eb09778329be-httpd-config\") pod \"neutron-7cfbd9cd46-qsg4t\" (UID: \"be0e9165-8f19-438f-a422-eb09778329be\") " pod="openstack/neutron-7cfbd9cd46-qsg4t" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.002755 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9r97k\" (UniqueName: \"kubernetes.io/projected/be0e9165-8f19-438f-a422-eb09778329be-kube-api-access-9r97k\") pod \"neutron-7cfbd9cd46-qsg4t\" (UID: \"be0e9165-8f19-438f-a422-eb09778329be\") " pod="openstack/neutron-7cfbd9cd46-qsg4t" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.171245 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7cfbd9cd46-qsg4t" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.239445 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6b7bf9567f-k8kfh"] Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.243031 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6b7bf9567f-k8kfh"] Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.243142 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6b7bf9567f-k8kfh" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.262427 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-8474964c54-bvtdt"] Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.264247 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-8474964c54-bvtdt" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.270488 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.270710 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-np774" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.271017 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.281871 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.299015 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-8474964c54-bvtdt"] Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.339533 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-jt5mk"] Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.358469 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd6rl\" (UniqueName: \"kubernetes.io/projected/6ccb1c12-6633-49c0-879c-5eaa29017991-kube-api-access-gd6rl\") pod \"barbican-worker-6b7bf9567f-k8kfh\" (UID: \"6ccb1c12-6633-49c0-879c-5eaa29017991\") " pod="openstack/barbican-worker-6b7bf9567f-k8kfh" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.358515 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27ae5442-486e-406a-ba33-bfcbe792792f-logs\") pod \"barbican-keystone-listener-8474964c54-bvtdt\" (UID: \"27ae5442-486e-406a-ba33-bfcbe792792f\") " pod="openstack/barbican-keystone-listener-8474964c54-bvtdt" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.358592 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27ae5442-486e-406a-ba33-bfcbe792792f-config-data\") pod \"barbican-keystone-listener-8474964c54-bvtdt\" (UID: 
\"27ae5442-486e-406a-ba33-bfcbe792792f\") " pod="openstack/barbican-keystone-listener-8474964c54-bvtdt" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.358638 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ccb1c12-6633-49c0-879c-5eaa29017991-config-data-custom\") pod \"barbican-worker-6b7bf9567f-k8kfh\" (UID: \"6ccb1c12-6633-49c0-879c-5eaa29017991\") " pod="openstack/barbican-worker-6b7bf9567f-k8kfh" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.358661 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4qx8\" (UniqueName: \"kubernetes.io/projected/27ae5442-486e-406a-ba33-bfcbe792792f-kube-api-access-p4qx8\") pod \"barbican-keystone-listener-8474964c54-bvtdt\" (UID: \"27ae5442-486e-406a-ba33-bfcbe792792f\") " pod="openstack/barbican-keystone-listener-8474964c54-bvtdt" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.358722 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27ae5442-486e-406a-ba33-bfcbe792792f-combined-ca-bundle\") pod \"barbican-keystone-listener-8474964c54-bvtdt\" (UID: \"27ae5442-486e-406a-ba33-bfcbe792792f\") " pod="openstack/barbican-keystone-listener-8474964c54-bvtdt" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.358744 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ccb1c12-6633-49c0-879c-5eaa29017991-config-data\") pod \"barbican-worker-6b7bf9567f-k8kfh\" (UID: \"6ccb1c12-6633-49c0-879c-5eaa29017991\") " pod="openstack/barbican-worker-6b7bf9567f-k8kfh" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.358807 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27ae5442-486e-406a-ba33-bfcbe792792f-config-data-custom\") pod \"barbican-keystone-listener-8474964c54-bvtdt\" (UID: \"27ae5442-486e-406a-ba33-bfcbe792792f\") " pod="openstack/barbican-keystone-listener-8474964c54-bvtdt" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.358902 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ccb1c12-6633-49c0-879c-5eaa29017991-combined-ca-bundle\") pod \"barbican-worker-6b7bf9567f-k8kfh\" (UID: \"6ccb1c12-6633-49c0-879c-5eaa29017991\") " pod="openstack/barbican-worker-6b7bf9567f-k8kfh" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.358957 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ccb1c12-6633-49c0-879c-5eaa29017991-logs\") pod \"barbican-worker-6b7bf9567f-k8kfh\" (UID: \"6ccb1c12-6633-49c0-879c-5eaa29017991\") " pod="openstack/barbican-worker-6b7bf9567f-k8kfh" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.362872 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-kq6wj"] Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.364972 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-kq6wj" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.376958 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-kq6wj"] Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.449855 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-786b6cd966-svbnn"] Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.451827 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-786b6cd966-svbnn" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.460464 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27ae5442-486e-406a-ba33-bfcbe792792f-combined-ca-bundle\") pod \"barbican-keystone-listener-8474964c54-bvtdt\" (UID: \"27ae5442-486e-406a-ba33-bfcbe792792f\") " pod="openstack/barbican-keystone-listener-8474964c54-bvtdt" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.460520 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ccb1c12-6633-49c0-879c-5eaa29017991-config-data\") pod \"barbican-worker-6b7bf9567f-k8kfh\" (UID: \"6ccb1c12-6633-49c0-879c-5eaa29017991\") " pod="openstack/barbican-worker-6b7bf9567f-k8kfh" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.460542 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27ae5442-486e-406a-ba33-bfcbe792792f-config-data-custom\") pod \"barbican-keystone-listener-8474964c54-bvtdt\" (UID: \"27ae5442-486e-406a-ba33-bfcbe792792f\") " pod="openstack/barbican-keystone-listener-8474964c54-bvtdt" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.460685 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6807c66c-2b59-4822-a455-7eda561b742f-dns-svc\") pod \"dnsmasq-dns-85ff748b95-kq6wj\" (UID: \"6807c66c-2b59-4822-a455-7eda561b742f\") " pod="openstack/dnsmasq-dns-85ff748b95-kq6wj" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.460708 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6807c66c-2b59-4822-a455-7eda561b742f-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-kq6wj\" (UID: \"6807c66c-2b59-4822-a455-7eda561b742f\") " pod="openstack/dnsmasq-dns-85ff748b95-kq6wj" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.460736 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6807c66c-2b59-4822-a455-7eda561b742f-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-kq6wj\" (UID: \"6807c66c-2b59-4822-a455-7eda561b742f\") " pod="openstack/dnsmasq-dns-85ff748b95-kq6wj" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.460763 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6807c66c-2b59-4822-a455-7eda561b742f-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-kq6wj\" (UID: \"6807c66c-2b59-4822-a455-7eda561b742f\") " pod="openstack/dnsmasq-dns-85ff748b95-kq6wj" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.460796 4680 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ccb1c12-6633-49c0-879c-5eaa29017991-combined-ca-bundle\") pod \"barbican-worker-6b7bf9567f-k8kfh\" (UID: \"6ccb1c12-6633-49c0-879c-5eaa29017991\") " pod="openstack/barbican-worker-6b7bf9567f-k8kfh" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.460830 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ccb1c12-6633-49c0-879c-5eaa29017991-logs\") pod \"barbican-worker-6b7bf9567f-k8kfh\" (UID: \"6ccb1c12-6633-49c0-879c-5eaa29017991\") " pod="openstack/barbican-worker-6b7bf9567f-k8kfh" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.460852 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd6rl\" (UniqueName: \"kubernetes.io/projected/6ccb1c12-6633-49c0-879c-5eaa29017991-kube-api-access-gd6rl\") pod \"barbican-worker-6b7bf9567f-k8kfh\" (UID: \"6ccb1c12-6633-49c0-879c-5eaa29017991\") " pod="openstack/barbican-worker-6b7bf9567f-k8kfh" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.460870 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27ae5442-486e-406a-ba33-bfcbe792792f-logs\") pod \"barbican-keystone-listener-8474964c54-bvtdt\" (UID: \"27ae5442-486e-406a-ba33-bfcbe792792f\") " pod="openstack/barbican-keystone-listener-8474964c54-bvtdt" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.460891 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxnp7\" (UniqueName: \"kubernetes.io/projected/6807c66c-2b59-4822-a455-7eda561b742f-kube-api-access-qxnp7\") pod \"dnsmasq-dns-85ff748b95-kq6wj\" (UID: \"6807c66c-2b59-4822-a455-7eda561b742f\") " pod="openstack/dnsmasq-dns-85ff748b95-kq6wj" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.460915 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6807c66c-2b59-4822-a455-7eda561b742f-config\") pod \"dnsmasq-dns-85ff748b95-kq6wj\" (UID: \"6807c66c-2b59-4822-a455-7eda561b742f\") " pod="openstack/dnsmasq-dns-85ff748b95-kq6wj" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.460963 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27ae5442-486e-406a-ba33-bfcbe792792f-config-data\") pod \"barbican-keystone-listener-8474964c54-bvtdt\" (UID: \"27ae5442-486e-406a-ba33-bfcbe792792f\") " pod="openstack/barbican-keystone-listener-8474964c54-bvtdt" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.460985 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ccb1c12-6633-49c0-879c-5eaa29017991-config-data-custom\") pod \"barbican-worker-6b7bf9567f-k8kfh\" (UID: \"6ccb1c12-6633-49c0-879c-5eaa29017991\") " pod="openstack/barbican-worker-6b7bf9567f-k8kfh" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.461009 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4qx8\" (UniqueName: \"kubernetes.io/projected/27ae5442-486e-406a-ba33-bfcbe792792f-kube-api-access-p4qx8\") pod \"barbican-keystone-listener-8474964c54-bvtdt\" (UID: \"27ae5442-486e-406a-ba33-bfcbe792792f\") " pod="openstack/barbican-keystone-listener-8474964c54-bvtdt" 
Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.471986 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.472602 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ccb1c12-6633-49c0-879c-5eaa29017991-logs\") pod \"barbican-worker-6b7bf9567f-k8kfh\" (UID: \"6ccb1c12-6633-49c0-879c-5eaa29017991\") " pod="openstack/barbican-worker-6b7bf9567f-k8kfh" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.473131 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27ae5442-486e-406a-ba33-bfcbe792792f-logs\") pod \"barbican-keystone-listener-8474964c54-bvtdt\" (UID: \"27ae5442-486e-406a-ba33-bfcbe792792f\") " pod="openstack/barbican-keystone-listener-8474964c54-bvtdt" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.474105 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-786b6cd966-svbnn"] Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.481021 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27ae5442-486e-406a-ba33-bfcbe792792f-config-data-custom\") pod \"barbican-keystone-listener-8474964c54-bvtdt\" (UID: \"27ae5442-486e-406a-ba33-bfcbe792792f\") " pod="openstack/barbican-keystone-listener-8474964c54-bvtdt" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.483933 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27ae5442-486e-406a-ba33-bfcbe792792f-config-data\") pod \"barbican-keystone-listener-8474964c54-bvtdt\" (UID: \"27ae5442-486e-406a-ba33-bfcbe792792f\") " pod="openstack/barbican-keystone-listener-8474964c54-bvtdt" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.484235 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ccb1c12-6633-49c0-879c-5eaa29017991-combined-ca-bundle\") pod \"barbican-worker-6b7bf9567f-k8kfh\" (UID: \"6ccb1c12-6633-49c0-879c-5eaa29017991\") " pod="openstack/barbican-worker-6b7bf9567f-k8kfh" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.485290 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ccb1c12-6633-49c0-879c-5eaa29017991-config-data\") pod \"barbican-worker-6b7bf9567f-k8kfh\" (UID: \"6ccb1c12-6633-49c0-879c-5eaa29017991\") " pod="openstack/barbican-worker-6b7bf9567f-k8kfh" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.489910 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ccb1c12-6633-49c0-879c-5eaa29017991-config-data-custom\") pod \"barbican-worker-6b7bf9567f-k8kfh\" (UID: \"6ccb1c12-6633-49c0-879c-5eaa29017991\") " pod="openstack/barbican-worker-6b7bf9567f-k8kfh" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.518434 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27ae5442-486e-406a-ba33-bfcbe792792f-combined-ca-bundle\") pod \"barbican-keystone-listener-8474964c54-bvtdt\" (UID: \"27ae5442-486e-406a-ba33-bfcbe792792f\") " pod="openstack/barbican-keystone-listener-8474964c54-bvtdt" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 
18:03:13.522848 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4qx8\" (UniqueName: \"kubernetes.io/projected/27ae5442-486e-406a-ba33-bfcbe792792f-kube-api-access-p4qx8\") pod \"barbican-keystone-listener-8474964c54-bvtdt\" (UID: \"27ae5442-486e-406a-ba33-bfcbe792792f\") " pod="openstack/barbican-keystone-listener-8474964c54-bvtdt" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.529582 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd6rl\" (UniqueName: \"kubernetes.io/projected/6ccb1c12-6633-49c0-879c-5eaa29017991-kube-api-access-gd6rl\") pod \"barbican-worker-6b7bf9567f-k8kfh\" (UID: \"6ccb1c12-6633-49c0-879c-5eaa29017991\") " pod="openstack/barbican-worker-6b7bf9567f-k8kfh" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.565956 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxnp7\" (UniqueName: \"kubernetes.io/projected/6807c66c-2b59-4822-a455-7eda561b742f-kube-api-access-qxnp7\") pod \"dnsmasq-dns-85ff748b95-kq6wj\" (UID: \"6807c66c-2b59-4822-a455-7eda561b742f\") " pod="openstack/dnsmasq-dns-85ff748b95-kq6wj" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.566008 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6807c66c-2b59-4822-a455-7eda561b742f-config\") pod \"dnsmasq-dns-85ff748b95-kq6wj\" (UID: \"6807c66c-2b59-4822-a455-7eda561b742f\") " pod="openstack/dnsmasq-dns-85ff748b95-kq6wj" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.566048 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f41da1e-3f88-41c7-ab31-f3efdd7790c9-config-data\") pod \"barbican-api-786b6cd966-svbnn\" (UID: \"7f41da1e-3f88-41c7-ab31-f3efdd7790c9\") " pod="openstack/barbican-api-786b6cd966-svbnn" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.566115 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f41da1e-3f88-41c7-ab31-f3efdd7790c9-combined-ca-bundle\") pod \"barbican-api-786b6cd966-svbnn\" (UID: \"7f41da1e-3f88-41c7-ab31-f3efdd7790c9\") " pod="openstack/barbican-api-786b6cd966-svbnn" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.566173 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f41da1e-3f88-41c7-ab31-f3efdd7790c9-logs\") pod \"barbican-api-786b6cd966-svbnn\" (UID: \"7f41da1e-3f88-41c7-ab31-f3efdd7790c9\") " pod="openstack/barbican-api-786b6cd966-svbnn" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.566203 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6807c66c-2b59-4822-a455-7eda561b742f-dns-svc\") pod \"dnsmasq-dns-85ff748b95-kq6wj\" (UID: \"6807c66c-2b59-4822-a455-7eda561b742f\") " pod="openstack/dnsmasq-dns-85ff748b95-kq6wj" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.566223 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6807c66c-2b59-4822-a455-7eda561b742f-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-kq6wj\" (UID: \"6807c66c-2b59-4822-a455-7eda561b742f\") " pod="openstack/dnsmasq-dns-85ff748b95-kq6wj" Feb 17 18:03:13 
crc kubenswrapper[4680]: I0217 18:03:13.566239 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f41da1e-3f88-41c7-ab31-f3efdd7790c9-config-data-custom\") pod \"barbican-api-786b6cd966-svbnn\" (UID: \"7f41da1e-3f88-41c7-ab31-f3efdd7790c9\") " pod="openstack/barbican-api-786b6cd966-svbnn" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.566266 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6807c66c-2b59-4822-a455-7eda561b742f-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-kq6wj\" (UID: \"6807c66c-2b59-4822-a455-7eda561b742f\") " pod="openstack/dnsmasq-dns-85ff748b95-kq6wj" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.566288 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6807c66c-2b59-4822-a455-7eda561b742f-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-kq6wj\" (UID: \"6807c66c-2b59-4822-a455-7eda561b742f\") " pod="openstack/dnsmasq-dns-85ff748b95-kq6wj" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.566350 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkdhh\" (UniqueName: \"kubernetes.io/projected/7f41da1e-3f88-41c7-ab31-f3efdd7790c9-kube-api-access-kkdhh\") pod \"barbican-api-786b6cd966-svbnn\" (UID: \"7f41da1e-3f88-41c7-ab31-f3efdd7790c9\") " pod="openstack/barbican-api-786b6cd966-svbnn" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.568038 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6807c66c-2b59-4822-a455-7eda561b742f-dns-svc\") pod \"dnsmasq-dns-85ff748b95-kq6wj\" (UID: \"6807c66c-2b59-4822-a455-7eda561b742f\") " pod="openstack/dnsmasq-dns-85ff748b95-kq6wj" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.568746 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6807c66c-2b59-4822-a455-7eda561b742f-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-kq6wj\" (UID: \"6807c66c-2b59-4822-a455-7eda561b742f\") " pod="openstack/dnsmasq-dns-85ff748b95-kq6wj" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.570741 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6807c66c-2b59-4822-a455-7eda561b742f-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-kq6wj\" (UID: \"6807c66c-2b59-4822-a455-7eda561b742f\") " pod="openstack/dnsmasq-dns-85ff748b95-kq6wj" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.577548 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6807c66c-2b59-4822-a455-7eda561b742f-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-kq6wj\" (UID: \"6807c66c-2b59-4822-a455-7eda561b742f\") " pod="openstack/dnsmasq-dns-85ff748b95-kq6wj" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.577765 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6807c66c-2b59-4822-a455-7eda561b742f-config\") pod \"dnsmasq-dns-85ff748b95-kq6wj\" (UID: \"6807c66c-2b59-4822-a455-7eda561b742f\") " pod="openstack/dnsmasq-dns-85ff748b95-kq6wj" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.594330 4680 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxnp7\" (UniqueName: \"kubernetes.io/projected/6807c66c-2b59-4822-a455-7eda561b742f-kube-api-access-qxnp7\") pod \"dnsmasq-dns-85ff748b95-kq6wj\" (UID: \"6807c66c-2b59-4822-a455-7eda561b742f\") " pod="openstack/dnsmasq-dns-85ff748b95-kq6wj" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.617117 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6b7bf9567f-k8kfh" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.649744 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-8474964c54-bvtdt" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.668625 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f41da1e-3f88-41c7-ab31-f3efdd7790c9-combined-ca-bundle\") pod \"barbican-api-786b6cd966-svbnn\" (UID: \"7f41da1e-3f88-41c7-ab31-f3efdd7790c9\") " pod="openstack/barbican-api-786b6cd966-svbnn" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.668963 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f41da1e-3f88-41c7-ab31-f3efdd7790c9-logs\") pod \"barbican-api-786b6cd966-svbnn\" (UID: \"7f41da1e-3f88-41c7-ab31-f3efdd7790c9\") " pod="openstack/barbican-api-786b6cd966-svbnn" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.669084 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f41da1e-3f88-41c7-ab31-f3efdd7790c9-config-data-custom\") pod \"barbican-api-786b6cd966-svbnn\" (UID: \"7f41da1e-3f88-41c7-ab31-f3efdd7790c9\") " pod="openstack/barbican-api-786b6cd966-svbnn" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.669267 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkdhh\" (UniqueName: \"kubernetes.io/projected/7f41da1e-3f88-41c7-ab31-f3efdd7790c9-kube-api-access-kkdhh\") pod \"barbican-api-786b6cd966-svbnn\" (UID: \"7f41da1e-3f88-41c7-ab31-f3efdd7790c9\") " pod="openstack/barbican-api-786b6cd966-svbnn" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.669467 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f41da1e-3f88-41c7-ab31-f3efdd7790c9-config-data\") pod \"barbican-api-786b6cd966-svbnn\" (UID: \"7f41da1e-3f88-41c7-ab31-f3efdd7790c9\") " pod="openstack/barbican-api-786b6cd966-svbnn" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.670206 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f41da1e-3f88-41c7-ab31-f3efdd7790c9-logs\") pod \"barbican-api-786b6cd966-svbnn\" (UID: \"7f41da1e-3f88-41c7-ab31-f3efdd7790c9\") " pod="openstack/barbican-api-786b6cd966-svbnn" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.674349 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f41da1e-3f88-41c7-ab31-f3efdd7790c9-config-data\") pod \"barbican-api-786b6cd966-svbnn\" (UID: \"7f41da1e-3f88-41c7-ab31-f3efdd7790c9\") " pod="openstack/barbican-api-786b6cd966-svbnn" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.693305 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-kq6wj" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.755293 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f41da1e-3f88-41c7-ab31-f3efdd7790c9-combined-ca-bundle\") pod \"barbican-api-786b6cd966-svbnn\" (UID: \"7f41da1e-3f88-41c7-ab31-f3efdd7790c9\") " pod="openstack/barbican-api-786b6cd966-svbnn" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.756894 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f41da1e-3f88-41c7-ab31-f3efdd7790c9-config-data-custom\") pod \"barbican-api-786b6cd966-svbnn\" (UID: \"7f41da1e-3f88-41c7-ab31-f3efdd7790c9\") " pod="openstack/barbican-api-786b6cd966-svbnn" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.761140 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkdhh\" (UniqueName: \"kubernetes.io/projected/7f41da1e-3f88-41c7-ab31-f3efdd7790c9-kube-api-access-kkdhh\") pod \"barbican-api-786b6cd966-svbnn\" (UID: \"7f41da1e-3f88-41c7-ab31-f3efdd7790c9\") " pod="openstack/barbican-api-786b6cd966-svbnn" Feb 17 18:03:13 crc kubenswrapper[4680]: I0217 18:03:13.945948 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-786b6cd966-svbnn" Feb 17 18:03:15 crc kubenswrapper[4680]: I0217 18:03:15.848785 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-59fd4ccc67-hf8zf"] Feb 17 18:03:15 crc kubenswrapper[4680]: I0217 18:03:15.853896 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59fd4ccc67-hf8zf" Feb 17 18:03:15 crc kubenswrapper[4680]: I0217 18:03:15.865575 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 17 18:03:15 crc kubenswrapper[4680]: I0217 18:03:15.871604 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 17 18:03:15 crc kubenswrapper[4680]: I0217 18:03:15.935458 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59fd4ccc67-hf8zf"] Feb 17 18:03:15 crc kubenswrapper[4680]: I0217 18:03:15.943262 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c24ed534-12dd-4bab-aac1-3ef792cd8e01-ovndb-tls-certs\") pod \"neutron-59fd4ccc67-hf8zf\" (UID: \"c24ed534-12dd-4bab-aac1-3ef792cd8e01\") " pod="openstack/neutron-59fd4ccc67-hf8zf" Feb 17 18:03:15 crc kubenswrapper[4680]: I0217 18:03:15.943592 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c24ed534-12dd-4bab-aac1-3ef792cd8e01-internal-tls-certs\") pod \"neutron-59fd4ccc67-hf8zf\" (UID: \"c24ed534-12dd-4bab-aac1-3ef792cd8e01\") " pod="openstack/neutron-59fd4ccc67-hf8zf" Feb 17 18:03:15 crc kubenswrapper[4680]: I0217 18:03:15.943980 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c24ed534-12dd-4bab-aac1-3ef792cd8e01-public-tls-certs\") pod \"neutron-59fd4ccc67-hf8zf\" (UID: \"c24ed534-12dd-4bab-aac1-3ef792cd8e01\") " pod="openstack/neutron-59fd4ccc67-hf8zf" Feb 17 18:03:15 crc kubenswrapper[4680]: I0217 18:03:15.944118 4680 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c24ed534-12dd-4bab-aac1-3ef792cd8e01-httpd-config\") pod \"neutron-59fd4ccc67-hf8zf\" (UID: \"c24ed534-12dd-4bab-aac1-3ef792cd8e01\") " pod="openstack/neutron-59fd4ccc67-hf8zf" Feb 17 18:03:15 crc kubenswrapper[4680]: I0217 18:03:15.944263 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgb5v\" (UniqueName: \"kubernetes.io/projected/c24ed534-12dd-4bab-aac1-3ef792cd8e01-kube-api-access-mgb5v\") pod \"neutron-59fd4ccc67-hf8zf\" (UID: \"c24ed534-12dd-4bab-aac1-3ef792cd8e01\") " pod="openstack/neutron-59fd4ccc67-hf8zf" Feb 17 18:03:15 crc kubenswrapper[4680]: I0217 18:03:15.944536 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c24ed534-12dd-4bab-aac1-3ef792cd8e01-config\") pod \"neutron-59fd4ccc67-hf8zf\" (UID: \"c24ed534-12dd-4bab-aac1-3ef792cd8e01\") " pod="openstack/neutron-59fd4ccc67-hf8zf" Feb 17 18:03:15 crc kubenswrapper[4680]: I0217 18:03:15.944651 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c24ed534-12dd-4bab-aac1-3ef792cd8e01-combined-ca-bundle\") pod \"neutron-59fd4ccc67-hf8zf\" (UID: \"c24ed534-12dd-4bab-aac1-3ef792cd8e01\") " pod="openstack/neutron-59fd4ccc67-hf8zf" Feb 17 18:03:16 crc kubenswrapper[4680]: I0217 18:03:16.046897 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c24ed534-12dd-4bab-aac1-3ef792cd8e01-config\") pod \"neutron-59fd4ccc67-hf8zf\" (UID: \"c24ed534-12dd-4bab-aac1-3ef792cd8e01\") " pod="openstack/neutron-59fd4ccc67-hf8zf" Feb 17 18:03:16 crc kubenswrapper[4680]: I0217 18:03:16.048035 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c24ed534-12dd-4bab-aac1-3ef792cd8e01-combined-ca-bundle\") pod \"neutron-59fd4ccc67-hf8zf\" (UID: \"c24ed534-12dd-4bab-aac1-3ef792cd8e01\") " pod="openstack/neutron-59fd4ccc67-hf8zf" Feb 17 18:03:16 crc kubenswrapper[4680]: I0217 18:03:16.048273 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c24ed534-12dd-4bab-aac1-3ef792cd8e01-ovndb-tls-certs\") pod \"neutron-59fd4ccc67-hf8zf\" (UID: \"c24ed534-12dd-4bab-aac1-3ef792cd8e01\") " pod="openstack/neutron-59fd4ccc67-hf8zf" Feb 17 18:03:16 crc kubenswrapper[4680]: I0217 18:03:16.048434 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c24ed534-12dd-4bab-aac1-3ef792cd8e01-internal-tls-certs\") pod \"neutron-59fd4ccc67-hf8zf\" (UID: \"c24ed534-12dd-4bab-aac1-3ef792cd8e01\") " pod="openstack/neutron-59fd4ccc67-hf8zf" Feb 17 18:03:16 crc kubenswrapper[4680]: I0217 18:03:16.048673 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c24ed534-12dd-4bab-aac1-3ef792cd8e01-public-tls-certs\") pod \"neutron-59fd4ccc67-hf8zf\" (UID: \"c24ed534-12dd-4bab-aac1-3ef792cd8e01\") " pod="openstack/neutron-59fd4ccc67-hf8zf" Feb 17 18:03:16 crc kubenswrapper[4680]: I0217 18:03:16.048777 4680 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c24ed534-12dd-4bab-aac1-3ef792cd8e01-httpd-config\") pod \"neutron-59fd4ccc67-hf8zf\" (UID: \"c24ed534-12dd-4bab-aac1-3ef792cd8e01\") " pod="openstack/neutron-59fd4ccc67-hf8zf" Feb 17 18:03:16 crc kubenswrapper[4680]: I0217 18:03:16.052517 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgb5v\" (UniqueName: \"kubernetes.io/projected/c24ed534-12dd-4bab-aac1-3ef792cd8e01-kube-api-access-mgb5v\") pod \"neutron-59fd4ccc67-hf8zf\" (UID: \"c24ed534-12dd-4bab-aac1-3ef792cd8e01\") " pod="openstack/neutron-59fd4ccc67-hf8zf" Feb 17 18:03:16 crc kubenswrapper[4680]: I0217 18:03:16.060501 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c24ed534-12dd-4bab-aac1-3ef792cd8e01-internal-tls-certs\") pod \"neutron-59fd4ccc67-hf8zf\" (UID: \"c24ed534-12dd-4bab-aac1-3ef792cd8e01\") " pod="openstack/neutron-59fd4ccc67-hf8zf" Feb 17 18:03:16 crc kubenswrapper[4680]: I0217 18:03:16.061267 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c24ed534-12dd-4bab-aac1-3ef792cd8e01-combined-ca-bundle\") pod \"neutron-59fd4ccc67-hf8zf\" (UID: \"c24ed534-12dd-4bab-aac1-3ef792cd8e01\") " pod="openstack/neutron-59fd4ccc67-hf8zf" Feb 17 18:03:16 crc kubenswrapper[4680]: I0217 18:03:16.061632 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c24ed534-12dd-4bab-aac1-3ef792cd8e01-ovndb-tls-certs\") pod \"neutron-59fd4ccc67-hf8zf\" (UID: \"c24ed534-12dd-4bab-aac1-3ef792cd8e01\") " pod="openstack/neutron-59fd4ccc67-hf8zf" Feb 17 18:03:16 crc kubenswrapper[4680]: I0217 18:03:16.068941 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c24ed534-12dd-4bab-aac1-3ef792cd8e01-public-tls-certs\") pod \"neutron-59fd4ccc67-hf8zf\" (UID: \"c24ed534-12dd-4bab-aac1-3ef792cd8e01\") " pod="openstack/neutron-59fd4ccc67-hf8zf" Feb 17 18:03:16 crc kubenswrapper[4680]: I0217 18:03:16.070333 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c24ed534-12dd-4bab-aac1-3ef792cd8e01-httpd-config\") pod \"neutron-59fd4ccc67-hf8zf\" (UID: \"c24ed534-12dd-4bab-aac1-3ef792cd8e01\") " pod="openstack/neutron-59fd4ccc67-hf8zf" Feb 17 18:03:16 crc kubenswrapper[4680]: I0217 18:03:16.076211 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c24ed534-12dd-4bab-aac1-3ef792cd8e01-config\") pod \"neutron-59fd4ccc67-hf8zf\" (UID: \"c24ed534-12dd-4bab-aac1-3ef792cd8e01\") " pod="openstack/neutron-59fd4ccc67-hf8zf" Feb 17 18:03:16 crc kubenswrapper[4680]: I0217 18:03:16.094423 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgb5v\" (UniqueName: \"kubernetes.io/projected/c24ed534-12dd-4bab-aac1-3ef792cd8e01-kube-api-access-mgb5v\") pod \"neutron-59fd4ccc67-hf8zf\" (UID: \"c24ed534-12dd-4bab-aac1-3ef792cd8e01\") " pod="openstack/neutron-59fd4ccc67-hf8zf" Feb 17 18:03:16 crc kubenswrapper[4680]: I0217 18:03:16.261606 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-59fd4ccc67-hf8zf" Feb 17 18:03:17 crc kubenswrapper[4680]: I0217 18:03:17.773444 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6c7f4dffcd-6k6kn" Feb 17 18:03:19 crc kubenswrapper[4680]: I0217 18:03:19.197502 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-jt5mk"] Feb 17 18:03:19 crc kubenswrapper[4680]: I0217 18:03:19.268152 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8ccfcccc8-2qwb2" podUID="d7b7455b-130d-4711-9174-16e352b16455" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Feb 17 18:03:19 crc kubenswrapper[4680]: I0217 18:03:19.321904 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-8474964c54-bvtdt"] Feb 17 18:03:19 crc kubenswrapper[4680]: I0217 18:03:19.353311 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8474964c54-bvtdt" event={"ID":"27ae5442-486e-406a-ba33-bfcbe792792f","Type":"ContainerStarted","Data":"3b683b5dbe6e98173601d27a9ffa2e56c36e398093f7f7cc45eb3f1313e78b9c"} Feb 17 18:03:19 crc kubenswrapper[4680]: I0217 18:03:19.388726 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f5f74b1-9667-421c-94e6-484208c36f55","Type":"ContainerStarted","Data":"1950124cbfede488894c7094f1924801f760a455c9bfec4648bbad9c4e82e9bd"} Feb 17 18:03:19 crc kubenswrapper[4680]: I0217 18:03:19.394012 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-jt5mk" event={"ID":"367c8ec3-d2e9-457a-b057-179da3568946","Type":"ContainerStarted","Data":"7f05b3e1d1eaf2c883bf1a4cd06141b1f36569004bdcf88d3189108b47a0eff2"} Feb 17 18:03:19 crc kubenswrapper[4680]: I0217 18:03:19.417483 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6978d4c7cb-4k7vb" podUID="91980b55-1251-4404-a22d-1572dd91ec8a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 17 18:03:19 crc kubenswrapper[4680]: I0217 18:03:19.426595 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-786b6cd966-svbnn"] Feb 17 18:03:19 crc kubenswrapper[4680]: I0217 18:03:19.467514 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-kq6wj"] Feb 17 18:03:19 crc kubenswrapper[4680]: I0217 18:03:19.796574 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7cfbd9cd46-qsg4t"] Feb 17 18:03:19 crc kubenswrapper[4680]: I0217 18:03:19.857363 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6c7f4dffcd-6k6kn" Feb 17 18:03:19 crc kubenswrapper[4680]: I0217 18:03:19.877628 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6b7bf9567f-k8kfh"] Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.018473 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59fd4ccc67-hf8zf"] Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.466702 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cfbd9cd46-qsg4t" 
event={"ID":"be0e9165-8f19-438f-a422-eb09778329be","Type":"ContainerStarted","Data":"02b9ab7a9af5c8fa6cb72f82609c11288d9fa2dd66d2b2518e3e3be36cfabb87"} Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.467063 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cfbd9cd46-qsg4t" event={"ID":"be0e9165-8f19-438f-a422-eb09778329be","Type":"ContainerStarted","Data":"c42ff3fac53ae68e782ee04dfbeb465ecfdfebd1e63bdf3ab97997bb74b50f20"} Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.470713 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6b7bf9567f-k8kfh" event={"ID":"6ccb1c12-6633-49c0-879c-5eaa29017991","Type":"ContainerStarted","Data":"7e51564f0b84d091d8c682147bf662e3f08204400577dec707ff0aad3c66bf4e"} Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.520641 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59fd4ccc67-hf8zf" event={"ID":"c24ed534-12dd-4bab-aac1-3ef792cd8e01","Type":"ContainerStarted","Data":"92fc80aafdcac686b86e770ce06d709876771c23a98026e1406b36edd56539ad"} Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.524038 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-786b6cd966-svbnn" event={"ID":"7f41da1e-3f88-41c7-ab31-f3efdd7790c9","Type":"ContainerStarted","Data":"2b4f12123c5b7cc2505e3fb697f2641ea1b4fdbcffa172b0a519c570e915c859"} Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.524089 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-786b6cd966-svbnn" event={"ID":"7f41da1e-3f88-41c7-ab31-f3efdd7790c9","Type":"ContainerStarted","Data":"282b0c8b16dbe8c26600cef43b182b2463c51ac9c06f546093896fddfaf64a8e"} Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.524103 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-786b6cd966-svbnn" event={"ID":"7f41da1e-3f88-41c7-ab31-f3efdd7790c9","Type":"ContainerStarted","Data":"b3c4ed2d7901ef4768b42fa658b3cd4497b1a922b0ab45260a8f66ec7f59c3fa"} Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.529144 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-786b6cd966-svbnn" Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.529295 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-786b6cd966-svbnn" Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.555920 4680 generic.go:334] "Generic (PLEG): container finished" podID="6807c66c-2b59-4822-a455-7eda561b742f" containerID="d5ab37b1712df0adb56cb8965c8e3ed11e6b9e305abca4cb62b47957fd7bc517" exitCode=0 Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.555983 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-kq6wj" event={"ID":"6807c66c-2b59-4822-a455-7eda561b742f","Type":"ContainerDied","Data":"d5ab37b1712df0adb56cb8965c8e3ed11e6b9e305abca4cb62b47957fd7bc517"} Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.556008 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-kq6wj" event={"ID":"6807c66c-2b59-4822-a455-7eda561b742f","Type":"ContainerStarted","Data":"4e9153c625b76c779fb6f60f9c034da73c825b6f1f49adeec6ce084b8bab7379"} Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.576103 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-786b6cd966-svbnn" podStartSLOduration=7.576085084 podStartE2EDuration="7.576085084s" 
podCreationTimestamp="2026-02-17 18:03:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:03:20.571721307 +0000 UTC m=+1130.122619986" watchObservedRunningTime="2026-02-17 18:03:20.576085084 +0000 UTC m=+1130.126983763" Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.578938 4680 generic.go:334] "Generic (PLEG): container finished" podID="367c8ec3-d2e9-457a-b057-179da3568946" containerID="2df5f88b7eee4dbdc0a41cca668d126b28546ff8ca8987e5adee96719a192e19" exitCode=0 Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.579061 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-jt5mk" event={"ID":"367c8ec3-d2e9-457a-b057-179da3568946","Type":"ContainerDied","Data":"2df5f88b7eee4dbdc0a41cca668d126b28546ff8ca8987e5adee96719a192e19"} Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.653010 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6dc475ddf4-vbkll"] Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.674443 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6dc475ddf4-vbkll" Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.677037 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.689654 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.701043 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6dc475ddf4-vbkll"] Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.795012 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf-config-data\") pod \"barbican-api-6dc475ddf4-vbkll\" (UID: \"a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf\") " pod="openstack/barbican-api-6dc475ddf4-vbkll" Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.795374 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf-internal-tls-certs\") pod \"barbican-api-6dc475ddf4-vbkll\" (UID: \"a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf\") " pod="openstack/barbican-api-6dc475ddf4-vbkll" Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.795541 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf-logs\") pod \"barbican-api-6dc475ddf4-vbkll\" (UID: \"a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf\") " pod="openstack/barbican-api-6dc475ddf4-vbkll" Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.803930 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf-config-data-custom\") pod \"barbican-api-6dc475ddf4-vbkll\" (UID: \"a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf\") " pod="openstack/barbican-api-6dc475ddf4-vbkll" Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.808857 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf-combined-ca-bundle\") pod \"barbican-api-6dc475ddf4-vbkll\" (UID: \"a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf\") " pod="openstack/barbican-api-6dc475ddf4-vbkll" Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.819512 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpppx\" (UniqueName: \"kubernetes.io/projected/a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf-kube-api-access-fpppx\") pod \"barbican-api-6dc475ddf4-vbkll\" (UID: \"a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf\") " pod="openstack/barbican-api-6dc475ddf4-vbkll" Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.819851 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf-public-tls-certs\") pod \"barbican-api-6dc475ddf4-vbkll\" (UID: \"a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf\") " pod="openstack/barbican-api-6dc475ddf4-vbkll" Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.922190 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf-logs\") pod \"barbican-api-6dc475ddf4-vbkll\" (UID: \"a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf\") " pod="openstack/barbican-api-6dc475ddf4-vbkll" Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.922243 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf-config-data-custom\") pod \"barbican-api-6dc475ddf4-vbkll\" (UID: \"a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf\") " pod="openstack/barbican-api-6dc475ddf4-vbkll" Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.922308 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf-combined-ca-bundle\") pod \"barbican-api-6dc475ddf4-vbkll\" (UID: \"a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf\") " pod="openstack/barbican-api-6dc475ddf4-vbkll" Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.922385 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpppx\" (UniqueName: \"kubernetes.io/projected/a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf-kube-api-access-fpppx\") pod \"barbican-api-6dc475ddf4-vbkll\" (UID: \"a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf\") " pod="openstack/barbican-api-6dc475ddf4-vbkll" Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.922431 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf-public-tls-certs\") pod \"barbican-api-6dc475ddf4-vbkll\" (UID: \"a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf\") " pod="openstack/barbican-api-6dc475ddf4-vbkll" Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.922512 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf-config-data\") pod \"barbican-api-6dc475ddf4-vbkll\" (UID: \"a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf\") " pod="openstack/barbican-api-6dc475ddf4-vbkll" Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.922534 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf-internal-tls-certs\") pod \"barbican-api-6dc475ddf4-vbkll\" (UID: \"a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf\") " pod="openstack/barbican-api-6dc475ddf4-vbkll" Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.927575 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf-logs\") pod \"barbican-api-6dc475ddf4-vbkll\" (UID: \"a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf\") " pod="openstack/barbican-api-6dc475ddf4-vbkll" Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.933256 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf-combined-ca-bundle\") pod \"barbican-api-6dc475ddf4-vbkll\" (UID: \"a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf\") " pod="openstack/barbican-api-6dc475ddf4-vbkll" Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.947533 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf-config-data-custom\") pod \"barbican-api-6dc475ddf4-vbkll\" (UID: \"a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf\") " pod="openstack/barbican-api-6dc475ddf4-vbkll" Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.948205 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf-config-data\") pod \"barbican-api-6dc475ddf4-vbkll\" (UID: \"a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf\") " pod="openstack/barbican-api-6dc475ddf4-vbkll" Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.949009 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf-internal-tls-certs\") pod \"barbican-api-6dc475ddf4-vbkll\" (UID: \"a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf\") " pod="openstack/barbican-api-6dc475ddf4-vbkll" Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.957592 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf-public-tls-certs\") pod \"barbican-api-6dc475ddf4-vbkll\" (UID: \"a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf\") " pod="openstack/barbican-api-6dc475ddf4-vbkll" Feb 17 18:03:20 crc kubenswrapper[4680]: I0217 18:03:20.970452 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpppx\" (UniqueName: \"kubernetes.io/projected/a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf-kube-api-access-fpppx\") pod \"barbican-api-6dc475ddf4-vbkll\" (UID: \"a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf\") " pod="openstack/barbican-api-6dc475ddf4-vbkll" Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.081522 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6dc475ddf4-vbkll" Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.326174 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-jt5mk" Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.463853 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/367c8ec3-d2e9-457a-b057-179da3568946-dns-svc\") pod \"367c8ec3-d2e9-457a-b057-179da3568946\" (UID: \"367c8ec3-d2e9-457a-b057-179da3568946\") " Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.463966 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/367c8ec3-d2e9-457a-b057-179da3568946-ovsdbserver-nb\") pod \"367c8ec3-d2e9-457a-b057-179da3568946\" (UID: \"367c8ec3-d2e9-457a-b057-179da3568946\") " Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.464078 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmnq8\" (UniqueName: \"kubernetes.io/projected/367c8ec3-d2e9-457a-b057-179da3568946-kube-api-access-wmnq8\") pod \"367c8ec3-d2e9-457a-b057-179da3568946\" (UID: \"367c8ec3-d2e9-457a-b057-179da3568946\") " Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.464107 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/367c8ec3-d2e9-457a-b057-179da3568946-dns-swift-storage-0\") pod \"367c8ec3-d2e9-457a-b057-179da3568946\" (UID: \"367c8ec3-d2e9-457a-b057-179da3568946\") " Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.464274 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/367c8ec3-d2e9-457a-b057-179da3568946-config\") pod \"367c8ec3-d2e9-457a-b057-179da3568946\" (UID: \"367c8ec3-d2e9-457a-b057-179da3568946\") " Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.464383 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/367c8ec3-d2e9-457a-b057-179da3568946-ovsdbserver-sb\") pod \"367c8ec3-d2e9-457a-b057-179da3568946\" (UID: \"367c8ec3-d2e9-457a-b057-179da3568946\") " Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.514774 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/367c8ec3-d2e9-457a-b057-179da3568946-kube-api-access-wmnq8" (OuterVolumeSpecName: "kube-api-access-wmnq8") pod "367c8ec3-d2e9-457a-b057-179da3568946" (UID: "367c8ec3-d2e9-457a-b057-179da3568946"). InnerVolumeSpecName "kube-api-access-wmnq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.535028 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/367c8ec3-d2e9-457a-b057-179da3568946-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "367c8ec3-d2e9-457a-b057-179da3568946" (UID: "367c8ec3-d2e9-457a-b057-179da3568946"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.540790 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/367c8ec3-d2e9-457a-b057-179da3568946-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "367c8ec3-d2e9-457a-b057-179da3568946" (UID: "367c8ec3-d2e9-457a-b057-179da3568946"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.541148 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/367c8ec3-d2e9-457a-b057-179da3568946-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "367c8ec3-d2e9-457a-b057-179da3568946" (UID: "367c8ec3-d2e9-457a-b057-179da3568946"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.569266 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/367c8ec3-d2e9-457a-b057-179da3568946-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.569306 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/367c8ec3-d2e9-457a-b057-179da3568946-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.569338 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmnq8\" (UniqueName: \"kubernetes.io/projected/367c8ec3-d2e9-457a-b057-179da3568946-kube-api-access-wmnq8\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.569351 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/367c8ec3-d2e9-457a-b057-179da3568946-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.586148 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/367c8ec3-d2e9-457a-b057-179da3568946-config" (OuterVolumeSpecName: "config") pod "367c8ec3-d2e9-457a-b057-179da3568946" (UID: "367c8ec3-d2e9-457a-b057-179da3568946"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.591554 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/367c8ec3-d2e9-457a-b057-179da3568946-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "367c8ec3-d2e9-457a-b057-179da3568946" (UID: "367c8ec3-d2e9-457a-b057-179da3568946"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.636279 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cfbd9cd46-qsg4t" event={"ID":"be0e9165-8f19-438f-a422-eb09778329be","Type":"ContainerStarted","Data":"98cfbb44b49ce52953de041f8e9706e8267bf17c6b93a04705aa6e2f5b08c269"} Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.637616 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7cfbd9cd46-qsg4t" Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.655629 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59fd4ccc67-hf8zf" event={"ID":"c24ed534-12dd-4bab-aac1-3ef792cd8e01","Type":"ContainerStarted","Data":"05bddc6aa2cc4f729709b99c480c17f7942ee48efe416da74de220dcc30a657b"} Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.655684 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59fd4ccc67-hf8zf" event={"ID":"c24ed534-12dd-4bab-aac1-3ef792cd8e01","Type":"ContainerStarted","Data":"f6e0b4d85176003b0e019cc778ed4b94cb8dc70f3b6219fbc398eb2a7635edd0"} Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.656588 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-59fd4ccc67-hf8zf" Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.670885 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-kq6wj" event={"ID":"6807c66c-2b59-4822-a455-7eda561b742f","Type":"ContainerStarted","Data":"35382ab69449b63b061b83f6ddd73aeff199ca205c5953215ae0dec0eb6f6f0f"} Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.671447 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-kq6wj" Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.671774 4680 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/367c8ec3-d2e9-457a-b057-179da3568946-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.671805 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/367c8ec3-d2e9-457a-b057-179da3568946-config\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.680295 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-jt5mk" Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.680357 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-jt5mk" event={"ID":"367c8ec3-d2e9-457a-b057-179da3568946","Type":"ContainerDied","Data":"7f05b3e1d1eaf2c883bf1a4cd06141b1f36569004bdcf88d3189108b47a0eff2"} Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.680402 4680 scope.go:117] "RemoveContainer" containerID="2df5f88b7eee4dbdc0a41cca668d126b28546ff8ca8987e5adee96719a192e19" Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.691202 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7cfbd9cd46-qsg4t" podStartSLOduration=9.691179665 podStartE2EDuration="9.691179665s" podCreationTimestamp="2026-02-17 18:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:03:21.679706985 +0000 UTC m=+1131.230605674" watchObservedRunningTime="2026-02-17 18:03:21.691179665 +0000 UTC m=+1131.242078344" Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.739712 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-59fd4ccc67-hf8zf" podStartSLOduration=6.739689956 podStartE2EDuration="6.739689956s" podCreationTimestamp="2026-02-17 18:03:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:03:21.727137073 +0000 UTC m=+1131.278035782" watchObservedRunningTime="2026-02-17 18:03:21.739689956 +0000 UTC m=+1131.290588635" Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.772604 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-kq6wj" podStartSLOduration=8.772586739 podStartE2EDuration="8.772586739s" podCreationTimestamp="2026-02-17 18:03:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:03:21.764269758 +0000 UTC m=+1131.315168437" watchObservedRunningTime="2026-02-17 18:03:21.772586739 +0000 UTC m=+1131.323485418" Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.877876 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-jt5mk"] Feb 17 18:03:21 crc kubenswrapper[4680]: I0217 18:03:21.901377 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-jt5mk"] Feb 17 18:03:22 crc kubenswrapper[4680]: I0217 18:03:22.343419 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6dc475ddf4-vbkll"] Feb 17 18:03:22 crc kubenswrapper[4680]: W0217 18:03:22.376923 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda71f4bc8_fabb_47ea_abf6_4c335bc8bdbf.slice/crio-4cad472be4a4f9f5ed542849b0186e418d564b42e51bcb88b7772d4dba6430b4 WatchSource:0}: Error finding container 4cad472be4a4f9f5ed542849b0186e418d564b42e51bcb88b7772d4dba6430b4: Status 404 returned error can't find the container with id 4cad472be4a4f9f5ed542849b0186e418d564b42e51bcb88b7772d4dba6430b4 Feb 17 18:03:22 crc kubenswrapper[4680]: I0217 18:03:22.703289 4680 generic.go:334] "Generic (PLEG): container finished" podID="f869168f-8063-46a6-b574-d8be569ac733" containerID="402a0262b690fade24798da881d746d742311d70295f6b4d9909081f63083658" exitCode=0 
Feb 17 18:03:22 crc kubenswrapper[4680]: I0217 18:03:22.703626 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9458w" event={"ID":"f869168f-8063-46a6-b574-d8be569ac733","Type":"ContainerDied","Data":"402a0262b690fade24798da881d746d742311d70295f6b4d9909081f63083658"} Feb 17 18:03:22 crc kubenswrapper[4680]: I0217 18:03:22.710920 4680 generic.go:334] "Generic (PLEG): container finished" podID="6e2a20ce-72be-4602-99f0-c1fb77919798" containerID="00719fd5adef0290617f1485cfa427f4dfba8d9b3f5d0851e127076421220cf7" exitCode=137 Feb 17 18:03:22 crc kubenswrapper[4680]: I0217 18:03:22.710950 4680 generic.go:334] "Generic (PLEG): container finished" podID="6e2a20ce-72be-4602-99f0-c1fb77919798" containerID="3475ac570d5fd7e9f2211e2e16bec30c65494169117464102e52c577e380646f" exitCode=137 Feb 17 18:03:22 crc kubenswrapper[4680]: I0217 18:03:22.710992 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c489b55b9-9nhw5" event={"ID":"6e2a20ce-72be-4602-99f0-c1fb77919798","Type":"ContainerDied","Data":"00719fd5adef0290617f1485cfa427f4dfba8d9b3f5d0851e127076421220cf7"} Feb 17 18:03:22 crc kubenswrapper[4680]: I0217 18:03:22.711019 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c489b55b9-9nhw5" event={"ID":"6e2a20ce-72be-4602-99f0-c1fb77919798","Type":"ContainerDied","Data":"3475ac570d5fd7e9f2211e2e16bec30c65494169117464102e52c577e380646f"} Feb 17 18:03:22 crc kubenswrapper[4680]: I0217 18:03:22.730117 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dc475ddf4-vbkll" event={"ID":"a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf","Type":"ContainerStarted","Data":"4cad472be4a4f9f5ed542849b0186e418d564b42e51bcb88b7772d4dba6430b4"} Feb 17 18:03:23 crc kubenswrapper[4680]: I0217 18:03:23.133228 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="367c8ec3-d2e9-457a-b057-179da3568946" path="/var/lib/kubelet/pods/367c8ec3-d2e9-457a-b057-179da3568946/volumes" Feb 17 18:03:23 crc kubenswrapper[4680]: I0217 18:03:23.795841 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dc475ddf4-vbkll" event={"ID":"a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf","Type":"ContainerStarted","Data":"a918bb4e9ab74ed2af9d4b6051028621aec1fb0fc18885fe1730947d240ae7f0"} Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.399626 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c489b55b9-9nhw5" Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.526675 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-9458w" Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.559865 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e2a20ce-72be-4602-99f0-c1fb77919798-logs\") pod \"6e2a20ce-72be-4602-99f0-c1fb77919798\" (UID: \"6e2a20ce-72be-4602-99f0-c1fb77919798\") " Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.559933 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc4jq\" (UniqueName: \"kubernetes.io/projected/6e2a20ce-72be-4602-99f0-c1fb77919798-kube-api-access-zc4jq\") pod \"6e2a20ce-72be-4602-99f0-c1fb77919798\" (UID: \"6e2a20ce-72be-4602-99f0-c1fb77919798\") " Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.560226 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e2a20ce-72be-4602-99f0-c1fb77919798-config-data\") pod \"6e2a20ce-72be-4602-99f0-c1fb77919798\" (UID: \"6e2a20ce-72be-4602-99f0-c1fb77919798\") " Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.560289 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6e2a20ce-72be-4602-99f0-c1fb77919798-horizon-secret-key\") pod \"6e2a20ce-72be-4602-99f0-c1fb77919798\" (UID: \"6e2a20ce-72be-4602-99f0-c1fb77919798\") " Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.560371 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e2a20ce-72be-4602-99f0-c1fb77919798-scripts\") pod \"6e2a20ce-72be-4602-99f0-c1fb77919798\" (UID: \"6e2a20ce-72be-4602-99f0-c1fb77919798\") " Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.562333 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e2a20ce-72be-4602-99f0-c1fb77919798-logs" (OuterVolumeSpecName: "logs") pod "6e2a20ce-72be-4602-99f0-c1fb77919798" (UID: "6e2a20ce-72be-4602-99f0-c1fb77919798"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.569022 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e2a20ce-72be-4602-99f0-c1fb77919798-kube-api-access-zc4jq" (OuterVolumeSpecName: "kube-api-access-zc4jq") pod "6e2a20ce-72be-4602-99f0-c1fb77919798" (UID: "6e2a20ce-72be-4602-99f0-c1fb77919798"). InnerVolumeSpecName "kube-api-access-zc4jq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.579699 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e2a20ce-72be-4602-99f0-c1fb77919798-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "6e2a20ce-72be-4602-99f0-c1fb77919798" (UID: "6e2a20ce-72be-4602-99f0-c1fb77919798"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.606820 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e2a20ce-72be-4602-99f0-c1fb77919798-config-data" (OuterVolumeSpecName: "config-data") pod "6e2a20ce-72be-4602-99f0-c1fb77919798" (UID: "6e2a20ce-72be-4602-99f0-c1fb77919798"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.637363 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e2a20ce-72be-4602-99f0-c1fb77919798-scripts" (OuterVolumeSpecName: "scripts") pod "6e2a20ce-72be-4602-99f0-c1fb77919798" (UID: "6e2a20ce-72be-4602-99f0-c1fb77919798"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.671649 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f869168f-8063-46a6-b574-d8be569ac733-combined-ca-bundle\") pod \"f869168f-8063-46a6-b574-d8be569ac733\" (UID: \"f869168f-8063-46a6-b574-d8be569ac733\") " Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.671744 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f869168f-8063-46a6-b574-d8be569ac733-db-sync-config-data\") pod \"f869168f-8063-46a6-b574-d8be569ac733\" (UID: \"f869168f-8063-46a6-b574-d8be569ac733\") " Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.671855 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f869168f-8063-46a6-b574-d8be569ac733-etc-machine-id\") pod \"f869168f-8063-46a6-b574-d8be569ac733\" (UID: \"f869168f-8063-46a6-b574-d8be569ac733\") " Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.672059 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f869168f-8063-46a6-b574-d8be569ac733-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f869168f-8063-46a6-b574-d8be569ac733" (UID: "f869168f-8063-46a6-b574-d8be569ac733"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.672142 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f869168f-8063-46a6-b574-d8be569ac733-scripts\") pod \"f869168f-8063-46a6-b574-d8be569ac733\" (UID: \"f869168f-8063-46a6-b574-d8be569ac733\") " Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.672679 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f869168f-8063-46a6-b574-d8be569ac733-config-data\") pod \"f869168f-8063-46a6-b574-d8be569ac733\" (UID: \"f869168f-8063-46a6-b574-d8be569ac733\") " Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.672979 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtxmg\" (UniqueName: \"kubernetes.io/projected/f869168f-8063-46a6-b574-d8be569ac733-kube-api-access-jtxmg\") pod \"f869168f-8063-46a6-b574-d8be569ac733\" (UID: \"f869168f-8063-46a6-b574-d8be569ac733\") " Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.673837 4680 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e2a20ce-72be-4602-99f0-c1fb77919798-logs\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.673857 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc4jq\" (UniqueName: \"kubernetes.io/projected/6e2a20ce-72be-4602-99f0-c1fb77919798-kube-api-access-zc4jq\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.673869 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e2a20ce-72be-4602-99f0-c1fb77919798-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.673899 4680 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6e2a20ce-72be-4602-99f0-c1fb77919798-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.673911 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e2a20ce-72be-4602-99f0-c1fb77919798-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.673921 4680 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f869168f-8063-46a6-b574-d8be569ac733-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.679557 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f869168f-8063-46a6-b574-d8be569ac733-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f869168f-8063-46a6-b574-d8be569ac733" (UID: "f869168f-8063-46a6-b574-d8be569ac733"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.684637 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f869168f-8063-46a6-b574-d8be569ac733-kube-api-access-jtxmg" (OuterVolumeSpecName: "kube-api-access-jtxmg") pod "f869168f-8063-46a6-b574-d8be569ac733" (UID: "f869168f-8063-46a6-b574-d8be569ac733"). InnerVolumeSpecName "kube-api-access-jtxmg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.686874 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f869168f-8063-46a6-b574-d8be569ac733-scripts" (OuterVolumeSpecName: "scripts") pod "f869168f-8063-46a6-b574-d8be569ac733" (UID: "f869168f-8063-46a6-b574-d8be569ac733"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.754289 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f869168f-8063-46a6-b574-d8be569ac733-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f869168f-8063-46a6-b574-d8be569ac733" (UID: "f869168f-8063-46a6-b574-d8be569ac733"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.776386 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f869168f-8063-46a6-b574-d8be569ac733-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.776415 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtxmg\" (UniqueName: \"kubernetes.io/projected/f869168f-8063-46a6-b574-d8be569ac733-kube-api-access-jtxmg\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.776425 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f869168f-8063-46a6-b574-d8be569ac733-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.776433 4680 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f869168f-8063-46a6-b574-d8be569ac733-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.806132 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f869168f-8063-46a6-b574-d8be569ac733-config-data" (OuterVolumeSpecName: "config-data") pod "f869168f-8063-46a6-b574-d8be569ac733" (UID: "f869168f-8063-46a6-b574-d8be569ac733"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.810401 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-c489b55b9-9nhw5" Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.810425 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c489b55b9-9nhw5" event={"ID":"6e2a20ce-72be-4602-99f0-c1fb77919798","Type":"ContainerDied","Data":"f1761eeb7470036380b55cd7796664791a82b02486a0cb9530d05c0a61690611"} Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.810480 4680 scope.go:117] "RemoveContainer" containerID="00719fd5adef0290617f1485cfa427f4dfba8d9b3f5d0851e127076421220cf7" Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.833280 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dc475ddf4-vbkll" event={"ID":"a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf","Type":"ContainerStarted","Data":"4cae3a6db5d08f64c6271451f89a8537adad4af7f8271c69460b5f88117f77b3"} Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.833635 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6dc475ddf4-vbkll" Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.834304 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6dc475ddf4-vbkll" Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.868085 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-c489b55b9-9nhw5"] Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.876687 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6b7bf9567f-k8kfh" event={"ID":"6ccb1c12-6633-49c0-879c-5eaa29017991","Type":"ContainerStarted","Data":"ea4035e9e9b906b0ee99531d8325b808d1046ee1088f043704e85d1fb03a489b"} Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.876739 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6b7bf9567f-k8kfh" event={"ID":"6ccb1c12-6633-49c0-879c-5eaa29017991","Type":"ContainerStarted","Data":"7e8b7b2d8b5fcb5360c34ee985bd02e8511eb6a24f4b689a87d0cb873b26c9fd"} Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.878473 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f869168f-8063-46a6-b574-d8be569ac733-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.892406 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8474964c54-bvtdt" event={"ID":"27ae5442-486e-406a-ba33-bfcbe792792f","Type":"ContainerStarted","Data":"80aaf5f5e69de9b43b8b689e433a6935dd3357fbd925ac1a3e1dcec906d25705"} Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.892457 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8474964c54-bvtdt" event={"ID":"27ae5442-486e-406a-ba33-bfcbe792792f","Type":"ContainerStarted","Data":"f7df8d073465a630fd4aded265cf4cdf6b379f699aa6384f098daf02f1f33328"} Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.899772 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9458w" event={"ID":"f869168f-8063-46a6-b574-d8be569ac733","Type":"ContainerDied","Data":"8e7e5abb95fbbc5de30b539e57d3f09b27baf2e3d4225f72dbe3841faa7ed277"} Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.899804 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e7e5abb95fbbc5de30b539e57d3f09b27baf2e3d4225f72dbe3841faa7ed277" Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.899872 4680 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-9458w" Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.908021 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-c489b55b9-9nhw5"] Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.909710 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6dc475ddf4-vbkll" podStartSLOduration=4.909693548 podStartE2EDuration="4.909693548s" podCreationTimestamp="2026-02-17 18:03:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:03:24.889905757 +0000 UTC m=+1134.440804456" watchObservedRunningTime="2026-02-17 18:03:24.909693548 +0000 UTC m=+1134.460592227" Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.932469 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6b7bf9567f-k8kfh" podStartSLOduration=9.027616413 podStartE2EDuration="12.932451203s" podCreationTimestamp="2026-02-17 18:03:12 +0000 UTC" firstStartedPulling="2026-02-17 18:03:19.936875946 +0000 UTC m=+1129.487774725" lastFinishedPulling="2026-02-17 18:03:23.841710836 +0000 UTC m=+1133.392609515" observedRunningTime="2026-02-17 18:03:24.915774659 +0000 UTC m=+1134.466673338" watchObservedRunningTime="2026-02-17 18:03:24.932451203 +0000 UTC m=+1134.483349882" Feb 17 18:03:24 crc kubenswrapper[4680]: I0217 18:03:24.949555 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-8474964c54-bvtdt" podStartSLOduration=7.38283193 podStartE2EDuration="11.949533409s" podCreationTimestamp="2026-02-17 18:03:13 +0000 UTC" firstStartedPulling="2026-02-17 18:03:19.278772785 +0000 UTC m=+1128.829671464" lastFinishedPulling="2026-02-17 18:03:23.845474264 +0000 UTC m=+1133.396372943" observedRunningTime="2026-02-17 18:03:24.946867245 +0000 UTC m=+1134.497765934" watchObservedRunningTime="2026-02-17 18:03:24.949533409 +0000 UTC m=+1134.500432088" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.068100 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 18:03:25 crc kubenswrapper[4680]: E0217 18:03:25.069356 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e2a20ce-72be-4602-99f0-c1fb77919798" containerName="horizon-log" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.069439 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e2a20ce-72be-4602-99f0-c1fb77919798" containerName="horizon-log" Feb 17 18:03:25 crc kubenswrapper[4680]: E0217 18:03:25.069498 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="367c8ec3-d2e9-457a-b057-179da3568946" containerName="init" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.069548 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="367c8ec3-d2e9-457a-b057-179da3568946" containerName="init" Feb 17 18:03:25 crc kubenswrapper[4680]: E0217 18:03:25.069623 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e2a20ce-72be-4602-99f0-c1fb77919798" containerName="horizon" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.069688 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e2a20ce-72be-4602-99f0-c1fb77919798" containerName="horizon" Feb 17 18:03:25 crc kubenswrapper[4680]: E0217 18:03:25.069754 4680 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f869168f-8063-46a6-b574-d8be569ac733" containerName="cinder-db-sync" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.069803 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f869168f-8063-46a6-b574-d8be569ac733" containerName="cinder-db-sync" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.074966 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e2a20ce-72be-4602-99f0-c1fb77919798" containerName="horizon-log" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.075423 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f869168f-8063-46a6-b574-d8be569ac733" containerName="cinder-db-sync" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.075545 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="367c8ec3-d2e9-457a-b057-179da3568946" containerName="init" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.075638 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e2a20ce-72be-4602-99f0-c1fb77919798" containerName="horizon" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.076760 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.082360 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.082426 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f11fb00-7f13-4757-9c5c-6261d32198a5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3f11fb00-7f13-4757-9c5c-6261d32198a5\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.082489 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f11fb00-7f13-4757-9c5c-6261d32198a5-config-data\") pod \"cinder-scheduler-0\" (UID: \"3f11fb00-7f13-4757-9c5c-6261d32198a5\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.082526 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f11fb00-7f13-4757-9c5c-6261d32198a5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3f11fb00-7f13-4757-9c5c-6261d32198a5\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.082558 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-pzc2g" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.082577 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f11fb00-7f13-4757-9c5c-6261d32198a5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3f11fb00-7f13-4757-9c5c-6261d32198a5\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.082610 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz5d2\" (UniqueName: \"kubernetes.io/projected/3f11fb00-7f13-4757-9c5c-6261d32198a5-kube-api-access-bz5d2\") pod \"cinder-scheduler-0\" (UID: \"3f11fb00-7f13-4757-9c5c-6261d32198a5\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:25 crc 
kubenswrapper[4680]: I0217 18:03:25.082663 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.082667 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f11fb00-7f13-4757-9c5c-6261d32198a5-scripts\") pod \"cinder-scheduler-0\" (UID: \"3f11fb00-7f13-4757-9c5c-6261d32198a5\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.082800 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.126176 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e2a20ce-72be-4602-99f0-c1fb77919798" path="/var/lib/kubelet/pods/6e2a20ce-72be-4602-99f0-c1fb77919798/volumes" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.134064 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.199550 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz5d2\" (UniqueName: \"kubernetes.io/projected/3f11fb00-7f13-4757-9c5c-6261d32198a5-kube-api-access-bz5d2\") pod \"cinder-scheduler-0\" (UID: \"3f11fb00-7f13-4757-9c5c-6261d32198a5\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.199651 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f11fb00-7f13-4757-9c5c-6261d32198a5-scripts\") pod \"cinder-scheduler-0\" (UID: \"3f11fb00-7f13-4757-9c5c-6261d32198a5\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.199741 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f11fb00-7f13-4757-9c5c-6261d32198a5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3f11fb00-7f13-4757-9c5c-6261d32198a5\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.199784 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f11fb00-7f13-4757-9c5c-6261d32198a5-config-data\") pod \"cinder-scheduler-0\" (UID: \"3f11fb00-7f13-4757-9c5c-6261d32198a5\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.199839 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f11fb00-7f13-4757-9c5c-6261d32198a5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3f11fb00-7f13-4757-9c5c-6261d32198a5\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.199899 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f11fb00-7f13-4757-9c5c-6261d32198a5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3f11fb00-7f13-4757-9c5c-6261d32198a5\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.200453 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f11fb00-7f13-4757-9c5c-6261d32198a5-etc-machine-id\") pod 
\"cinder-scheduler-0\" (UID: \"3f11fb00-7f13-4757-9c5c-6261d32198a5\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.209600 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f11fb00-7f13-4757-9c5c-6261d32198a5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3f11fb00-7f13-4757-9c5c-6261d32198a5\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.213433 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f11fb00-7f13-4757-9c5c-6261d32198a5-scripts\") pod \"cinder-scheduler-0\" (UID: \"3f11fb00-7f13-4757-9c5c-6261d32198a5\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.217618 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f11fb00-7f13-4757-9c5c-6261d32198a5-config-data\") pod \"cinder-scheduler-0\" (UID: \"3f11fb00-7f13-4757-9c5c-6261d32198a5\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.218156 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f11fb00-7f13-4757-9c5c-6261d32198a5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3f11fb00-7f13-4757-9c5c-6261d32198a5\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.271921 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz5d2\" (UniqueName: \"kubernetes.io/projected/3f11fb00-7f13-4757-9c5c-6261d32198a5-kube-api-access-bz5d2\") pod \"cinder-scheduler-0\" (UID: \"3f11fb00-7f13-4757-9c5c-6261d32198a5\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.272920 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-kq6wj"] Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.273116 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-kq6wj" podUID="6807c66c-2b59-4822-a455-7eda561b742f" containerName="dnsmasq-dns" containerID="cri-o://35382ab69449b63b061b83f6ddd73aeff199ca205c5953215ae0dec0eb6f6f0f" gracePeriod=10 Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.338823 4680 scope.go:117] "RemoveContainer" containerID="3475ac570d5fd7e9f2211e2e16bec30c65494169117464102e52c577e380646f" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.404166 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-bdq9p"] Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.407577 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-bdq9p" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.409943 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-config\") pod \"dnsmasq-dns-5c9776ccc5-bdq9p\" (UID: \"2625239a-cd47-4fd7-9ad2-f87fb89bfc40\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bdq9p" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.410015 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-bdq9p\" (UID: \"2625239a-cd47-4fd7-9ad2-f87fb89bfc40\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bdq9p" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.410038 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-bdq9p\" (UID: \"2625239a-cd47-4fd7-9ad2-f87fb89bfc40\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bdq9p" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.410058 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-bdq9p\" (UID: \"2625239a-cd47-4fd7-9ad2-f87fb89bfc40\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bdq9p" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.410113 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsvbt\" (UniqueName: \"kubernetes.io/projected/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-kube-api-access-tsvbt\") pod \"dnsmasq-dns-5c9776ccc5-bdq9p\" (UID: \"2625239a-cd47-4fd7-9ad2-f87fb89bfc40\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bdq9p" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.410144 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-bdq9p\" (UID: \"2625239a-cd47-4fd7-9ad2-f87fb89bfc40\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bdq9p" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.421678 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-bdq9p"] Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.446421 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.507309 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.517203 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-bdq9p\" (UID: \"2625239a-cd47-4fd7-9ad2-f87fb89bfc40\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bdq9p" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.517368 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsvbt\" (UniqueName: \"kubernetes.io/projected/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-kube-api-access-tsvbt\") pod \"dnsmasq-dns-5c9776ccc5-bdq9p\" (UID: \"2625239a-cd47-4fd7-9ad2-f87fb89bfc40\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bdq9p" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.517417 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-bdq9p\" (UID: \"2625239a-cd47-4fd7-9ad2-f87fb89bfc40\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bdq9p" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.517532 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-config\") pod \"dnsmasq-dns-5c9776ccc5-bdq9p\" (UID: \"2625239a-cd47-4fd7-9ad2-f87fb89bfc40\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bdq9p" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.517624 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-bdq9p\" (UID: \"2625239a-cd47-4fd7-9ad2-f87fb89bfc40\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bdq9p" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.517646 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-bdq9p\" (UID: \"2625239a-cd47-4fd7-9ad2-f87fb89bfc40\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bdq9p" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.518583 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-bdq9p\" (UID: \"2625239a-cd47-4fd7-9ad2-f87fb89bfc40\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bdq9p" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.519096 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-bdq9p\" (UID: \"2625239a-cd47-4fd7-9ad2-f87fb89bfc40\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bdq9p" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.519829 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5c9776ccc5-bdq9p\" (UID: \"2625239a-cd47-4fd7-9ad2-f87fb89bfc40\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bdq9p" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.528495 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-bdq9p\" (UID: \"2625239a-cd47-4fd7-9ad2-f87fb89bfc40\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bdq9p" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.531038 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.531793 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-config\") pod \"dnsmasq-dns-5c9776ccc5-bdq9p\" (UID: \"2625239a-cd47-4fd7-9ad2-f87fb89bfc40\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bdq9p" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.532634 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.563633 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsvbt\" (UniqueName: \"kubernetes.io/projected/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-kube-api-access-tsvbt\") pod \"dnsmasq-dns-5c9776ccc5-bdq9p\" (UID: \"2625239a-cd47-4fd7-9ad2-f87fb89bfc40\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bdq9p" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.587177 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.587814 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-bdq9p" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.727940 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e60fee03-af47-4ec5-b656-06b344e31568-config-data-custom\") pod \"cinder-api-0\" (UID: \"e60fee03-af47-4ec5-b656-06b344e31568\") " pod="openstack/cinder-api-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.730675 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e60fee03-af47-4ec5-b656-06b344e31568-config-data\") pod \"cinder-api-0\" (UID: \"e60fee03-af47-4ec5-b656-06b344e31568\") " pod="openstack/cinder-api-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.730750 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e60fee03-af47-4ec5-b656-06b344e31568-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e60fee03-af47-4ec5-b656-06b344e31568\") " pod="openstack/cinder-api-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.730787 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqxct\" (UniqueName: \"kubernetes.io/projected/e60fee03-af47-4ec5-b656-06b344e31568-kube-api-access-cqxct\") pod \"cinder-api-0\" (UID: \"e60fee03-af47-4ec5-b656-06b344e31568\") " pod="openstack/cinder-api-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.731127 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e60fee03-af47-4ec5-b656-06b344e31568-logs\") pod \"cinder-api-0\" (UID: \"e60fee03-af47-4ec5-b656-06b344e31568\") " pod="openstack/cinder-api-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.731286 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e60fee03-af47-4ec5-b656-06b344e31568-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e60fee03-af47-4ec5-b656-06b344e31568\") " pod="openstack/cinder-api-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.731437 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e60fee03-af47-4ec5-b656-06b344e31568-scripts\") pod \"cinder-api-0\" (UID: \"e60fee03-af47-4ec5-b656-06b344e31568\") " pod="openstack/cinder-api-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.833263 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e60fee03-af47-4ec5-b656-06b344e31568-scripts\") pod \"cinder-api-0\" (UID: \"e60fee03-af47-4ec5-b656-06b344e31568\") " pod="openstack/cinder-api-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.833425 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e60fee03-af47-4ec5-b656-06b344e31568-config-data-custom\") pod \"cinder-api-0\" (UID: \"e60fee03-af47-4ec5-b656-06b344e31568\") " pod="openstack/cinder-api-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.833452 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e60fee03-af47-4ec5-b656-06b344e31568-config-data\") pod \"cinder-api-0\" (UID: \"e60fee03-af47-4ec5-b656-06b344e31568\") " pod="openstack/cinder-api-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.833485 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e60fee03-af47-4ec5-b656-06b344e31568-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e60fee03-af47-4ec5-b656-06b344e31568\") " pod="openstack/cinder-api-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.833517 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqxct\" (UniqueName: \"kubernetes.io/projected/e60fee03-af47-4ec5-b656-06b344e31568-kube-api-access-cqxct\") pod \"cinder-api-0\" (UID: \"e60fee03-af47-4ec5-b656-06b344e31568\") " pod="openstack/cinder-api-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.833674 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e60fee03-af47-4ec5-b656-06b344e31568-logs\") pod \"cinder-api-0\" (UID: \"e60fee03-af47-4ec5-b656-06b344e31568\") " pod="openstack/cinder-api-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.833785 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e60fee03-af47-4ec5-b656-06b344e31568-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e60fee03-af47-4ec5-b656-06b344e31568\") " pod="openstack/cinder-api-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.844244 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e60fee03-af47-4ec5-b656-06b344e31568-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e60fee03-af47-4ec5-b656-06b344e31568\") " pod="openstack/cinder-api-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.844483 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e60fee03-af47-4ec5-b656-06b344e31568-logs\") pod \"cinder-api-0\" (UID: \"e60fee03-af47-4ec5-b656-06b344e31568\") " pod="openstack/cinder-api-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.844537 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e60fee03-af47-4ec5-b656-06b344e31568-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e60fee03-af47-4ec5-b656-06b344e31568\") " pod="openstack/cinder-api-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.852873 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e60fee03-af47-4ec5-b656-06b344e31568-config-data-custom\") pod \"cinder-api-0\" (UID: \"e60fee03-af47-4ec5-b656-06b344e31568\") " pod="openstack/cinder-api-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.854378 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e60fee03-af47-4ec5-b656-06b344e31568-scripts\") pod \"cinder-api-0\" (UID: \"e60fee03-af47-4ec5-b656-06b344e31568\") " pod="openstack/cinder-api-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.860135 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e60fee03-af47-4ec5-b656-06b344e31568-config-data\") pod \"cinder-api-0\" (UID: \"e60fee03-af47-4ec5-b656-06b344e31568\") " pod="openstack/cinder-api-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.864919 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqxct\" (UniqueName: \"kubernetes.io/projected/e60fee03-af47-4ec5-b656-06b344e31568-kube-api-access-cqxct\") pod \"cinder-api-0\" (UID: \"e60fee03-af47-4ec5-b656-06b344e31568\") " pod="openstack/cinder-api-0" Feb 17 18:03:25 crc kubenswrapper[4680]: I0217 18:03:25.898241 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 17 18:03:26 crc kubenswrapper[4680]: I0217 18:03:26.052422 4680 generic.go:334] "Generic (PLEG): container finished" podID="6807c66c-2b59-4822-a455-7eda561b742f" containerID="35382ab69449b63b061b83f6ddd73aeff199ca205c5953215ae0dec0eb6f6f0f" exitCode=0 Feb 17 18:03:26 crc kubenswrapper[4680]: I0217 18:03:26.053206 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-kq6wj" event={"ID":"6807c66c-2b59-4822-a455-7eda561b742f","Type":"ContainerDied","Data":"35382ab69449b63b061b83f6ddd73aeff199ca205c5953215ae0dec0eb6f6f0f"} Feb 17 18:03:26 crc kubenswrapper[4680]: I0217 18:03:26.140670 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-kq6wj" Feb 17 18:03:26 crc kubenswrapper[4680]: I0217 18:03:26.153301 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6807c66c-2b59-4822-a455-7eda561b742f-dns-svc\") pod \"6807c66c-2b59-4822-a455-7eda561b742f\" (UID: \"6807c66c-2b59-4822-a455-7eda561b742f\") " Feb 17 18:03:26 crc kubenswrapper[4680]: I0217 18:03:26.153470 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6807c66c-2b59-4822-a455-7eda561b742f-config\") pod \"6807c66c-2b59-4822-a455-7eda561b742f\" (UID: \"6807c66c-2b59-4822-a455-7eda561b742f\") " Feb 17 18:03:26 crc kubenswrapper[4680]: I0217 18:03:26.153522 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxnp7\" (UniqueName: \"kubernetes.io/projected/6807c66c-2b59-4822-a455-7eda561b742f-kube-api-access-qxnp7\") pod \"6807c66c-2b59-4822-a455-7eda561b742f\" (UID: \"6807c66c-2b59-4822-a455-7eda561b742f\") " Feb 17 18:03:26 crc kubenswrapper[4680]: I0217 18:03:26.153628 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6807c66c-2b59-4822-a455-7eda561b742f-dns-swift-storage-0\") pod \"6807c66c-2b59-4822-a455-7eda561b742f\" (UID: \"6807c66c-2b59-4822-a455-7eda561b742f\") " Feb 17 18:03:26 crc kubenswrapper[4680]: I0217 18:03:26.153732 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6807c66c-2b59-4822-a455-7eda561b742f-ovsdbserver-sb\") pod \"6807c66c-2b59-4822-a455-7eda561b742f\" (UID: \"6807c66c-2b59-4822-a455-7eda561b742f\") " Feb 17 18:03:26 crc kubenswrapper[4680]: I0217 18:03:26.153808 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6807c66c-2b59-4822-a455-7eda561b742f-ovsdbserver-nb\") pod \"6807c66c-2b59-4822-a455-7eda561b742f\" (UID: 
\"6807c66c-2b59-4822-a455-7eda561b742f\") " Feb 17 18:03:26 crc kubenswrapper[4680]: I0217 18:03:26.183750 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6807c66c-2b59-4822-a455-7eda561b742f-kube-api-access-qxnp7" (OuterVolumeSpecName: "kube-api-access-qxnp7") pod "6807c66c-2b59-4822-a455-7eda561b742f" (UID: "6807c66c-2b59-4822-a455-7eda561b742f"). InnerVolumeSpecName "kube-api-access-qxnp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:03:26 crc kubenswrapper[4680]: I0217 18:03:26.258701 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxnp7\" (UniqueName: \"kubernetes.io/projected/6807c66c-2b59-4822-a455-7eda561b742f-kube-api-access-qxnp7\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:26 crc kubenswrapper[4680]: I0217 18:03:26.356673 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6807c66c-2b59-4822-a455-7eda561b742f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6807c66c-2b59-4822-a455-7eda561b742f" (UID: "6807c66c-2b59-4822-a455-7eda561b742f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:03:26 crc kubenswrapper[4680]: I0217 18:03:26.360553 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6807c66c-2b59-4822-a455-7eda561b742f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:26 crc kubenswrapper[4680]: I0217 18:03:26.383594 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6807c66c-2b59-4822-a455-7eda561b742f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6807c66c-2b59-4822-a455-7eda561b742f" (UID: "6807c66c-2b59-4822-a455-7eda561b742f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:03:26 crc kubenswrapper[4680]: I0217 18:03:26.407068 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6807c66c-2b59-4822-a455-7eda561b742f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6807c66c-2b59-4822-a455-7eda561b742f" (UID: "6807c66c-2b59-4822-a455-7eda561b742f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:03:26 crc kubenswrapper[4680]: I0217 18:03:26.453542 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6807c66c-2b59-4822-a455-7eda561b742f-config" (OuterVolumeSpecName: "config") pod "6807c66c-2b59-4822-a455-7eda561b742f" (UID: "6807c66c-2b59-4822-a455-7eda561b742f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:03:26 crc kubenswrapper[4680]: I0217 18:03:26.499711 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6807c66c-2b59-4822-a455-7eda561b742f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:26 crc kubenswrapper[4680]: I0217 18:03:26.499745 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6807c66c-2b59-4822-a455-7eda561b742f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:26 crc kubenswrapper[4680]: I0217 18:03:26.499755 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6807c66c-2b59-4822-a455-7eda561b742f-config\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:26 crc kubenswrapper[4680]: I0217 18:03:26.500526 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 18:03:26 crc kubenswrapper[4680]: I0217 18:03:26.521459 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6807c66c-2b59-4822-a455-7eda561b742f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6807c66c-2b59-4822-a455-7eda561b742f" (UID: "6807c66c-2b59-4822-a455-7eda561b742f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:03:26 crc kubenswrapper[4680]: I0217 18:03:26.603564 4680 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6807c66c-2b59-4822-a455-7eda561b742f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:26 crc kubenswrapper[4680]: I0217 18:03:26.675095 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-bdq9p"] Feb 17 18:03:26 crc kubenswrapper[4680]: I0217 18:03:26.903066 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 18:03:26 crc kubenswrapper[4680]: W0217 18:03:26.920588 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode60fee03_af47_4ec5_b656_06b344e31568.slice/crio-fef3cc5b737d3085bdb9cfb41d0177fe49939a3183b1949ff91bd664f9996e69 WatchSource:0}: Error finding container fef3cc5b737d3085bdb9cfb41d0177fe49939a3183b1949ff91bd664f9996e69: Status 404 returned error can't find the container with id fef3cc5b737d3085bdb9cfb41d0177fe49939a3183b1949ff91bd664f9996e69 Feb 17 18:03:27 crc kubenswrapper[4680]: I0217 18:03:27.071719 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-kq6wj" event={"ID":"6807c66c-2b59-4822-a455-7eda561b742f","Type":"ContainerDied","Data":"4e9153c625b76c779fb6f60f9c034da73c825b6f1f49adeec6ce084b8bab7379"} Feb 17 18:03:27 crc kubenswrapper[4680]: I0217 18:03:27.071752 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-kq6wj" Feb 17 18:03:27 crc kubenswrapper[4680]: I0217 18:03:27.071775 4680 scope.go:117] "RemoveContainer" containerID="35382ab69449b63b061b83f6ddd73aeff199ca205c5953215ae0dec0eb6f6f0f" Feb 17 18:03:27 crc kubenswrapper[4680]: I0217 18:03:27.078649 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-bdq9p" event={"ID":"2625239a-cd47-4fd7-9ad2-f87fb89bfc40","Type":"ContainerStarted","Data":"4406fedb6452f19643ee69e390092b99be5f48b4f8518d5e091a003b4ea00e6f"} Feb 17 18:03:27 crc kubenswrapper[4680]: I0217 18:03:27.078687 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-bdq9p" event={"ID":"2625239a-cd47-4fd7-9ad2-f87fb89bfc40","Type":"ContainerStarted","Data":"a5991b40623aa0990b0fbdcfbd1b1be56ea1ca992bfa9decaac86707345c7b5c"} Feb 17 18:03:27 crc kubenswrapper[4680]: I0217 18:03:27.084617 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e60fee03-af47-4ec5-b656-06b344e31568","Type":"ContainerStarted","Data":"fef3cc5b737d3085bdb9cfb41d0177fe49939a3183b1949ff91bd664f9996e69"} Feb 17 18:03:27 crc kubenswrapper[4680]: I0217 18:03:27.175621 4680 scope.go:117] "RemoveContainer" containerID="d5ab37b1712df0adb56cb8965c8e3ed11e6b9e305abca4cb62b47957fd7bc517" Feb 17 18:03:27 crc kubenswrapper[4680]: I0217 18:03:27.192206 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3f11fb00-7f13-4757-9c5c-6261d32198a5","Type":"ContainerStarted","Data":"72342d6befb03a9dc7f69140e4cf7e4a730116e54dcfe0e4cd765094e1c24088"} Feb 17 18:03:27 crc kubenswrapper[4680]: I0217 18:03:27.295963 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-kq6wj"] Feb 17 18:03:27 crc kubenswrapper[4680]: I0217 18:03:27.305225 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-kq6wj"] Feb 17 18:03:27 crc kubenswrapper[4680]: I0217 18:03:27.976184 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 17 18:03:28 crc kubenswrapper[4680]: I0217 18:03:28.158892 4680 generic.go:334] "Generic (PLEG): container finished" podID="2625239a-cd47-4fd7-9ad2-f87fb89bfc40" containerID="4406fedb6452f19643ee69e390092b99be5f48b4f8518d5e091a003b4ea00e6f" exitCode=0 Feb 17 18:03:28 crc kubenswrapper[4680]: I0217 18:03:28.159177 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-bdq9p" event={"ID":"2625239a-cd47-4fd7-9ad2-f87fb89bfc40","Type":"ContainerDied","Data":"4406fedb6452f19643ee69e390092b99be5f48b4f8518d5e091a003b4ea00e6f"} Feb 17 18:03:29 crc kubenswrapper[4680]: I0217 18:03:29.120669 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6807c66c-2b59-4822-a455-7eda561b742f" path="/var/lib/kubelet/pods/6807c66c-2b59-4822-a455-7eda561b742f/volumes" Feb 17 18:03:29 crc kubenswrapper[4680]: I0217 18:03:29.217575 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8ccfcccc8-2qwb2" podUID="d7b7455b-130d-4711-9174-16e352b16455" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Feb 17 18:03:29 crc kubenswrapper[4680]: I0217 18:03:29.217647 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:03:29 crc 
kubenswrapper[4680]: I0217 18:03:29.218329 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"8b01e5dcc6cc0527ee746e19a1edb564cad531d2e71631a44d11071138f3ae62"} pod="openstack/horizon-8ccfcccc8-2qwb2" containerMessage="Container horizon failed startup probe, will be restarted" Feb 17 18:03:29 crc kubenswrapper[4680]: I0217 18:03:29.218362 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-8ccfcccc8-2qwb2" podUID="d7b7455b-130d-4711-9174-16e352b16455" containerName="horizon" containerID="cri-o://8b01e5dcc6cc0527ee746e19a1edb564cad531d2e71631a44d11071138f3ae62" gracePeriod=30 Feb 17 18:03:29 crc kubenswrapper[4680]: I0217 18:03:29.248695 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-bdq9p" event={"ID":"2625239a-cd47-4fd7-9ad2-f87fb89bfc40","Type":"ContainerStarted","Data":"b4181bbedd1ad100016664e6d6703d38f7b267ebf6ff27fea8eeb16d34c479e1"} Feb 17 18:03:29 crc kubenswrapper[4680]: I0217 18:03:29.250013 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-bdq9p" Feb 17 18:03:29 crc kubenswrapper[4680]: I0217 18:03:29.261794 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e60fee03-af47-4ec5-b656-06b344e31568","Type":"ContainerStarted","Data":"b8b3f76aa8ab3fe2957a31ebc341d75d60a5b6dbf611bf579ae796a779c010f0"} Feb 17 18:03:29 crc kubenswrapper[4680]: I0217 18:03:29.283489 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3f11fb00-7f13-4757-9c5c-6261d32198a5","Type":"ContainerStarted","Data":"e990cc9ce1632560b33ecd5097fcd50919af8af5d9eed2978790c71d8ef68ee9"} Feb 17 18:03:29 crc kubenswrapper[4680]: I0217 18:03:29.416263 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6978d4c7cb-4k7vb" podUID="91980b55-1251-4404-a22d-1572dd91ec8a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 17 18:03:29 crc kubenswrapper[4680]: I0217 18:03:29.416367 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6978d4c7cb-4k7vb" Feb 17 18:03:29 crc kubenswrapper[4680]: I0217 18:03:29.417305 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"f3a8f4a5fdca81edfa583020c0921b198bd2adb68d7e11153d1ee645737ca8c6"} pod="openstack/horizon-6978d4c7cb-4k7vb" containerMessage="Container horizon failed startup probe, will be restarted" Feb 17 18:03:29 crc kubenswrapper[4680]: I0217 18:03:29.417371 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6978d4c7cb-4k7vb" podUID="91980b55-1251-4404-a22d-1572dd91ec8a" containerName="horizon" containerID="cri-o://f3a8f4a5fdca81edfa583020c0921b198bd2adb68d7e11153d1ee645737ca8c6" gracePeriod=30 Feb 17 18:03:30 crc kubenswrapper[4680]: I0217 18:03:30.313358 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e60fee03-af47-4ec5-b656-06b344e31568","Type":"ContainerStarted","Data":"3a3f7defff16075c89f4efbb4a4fb2976304e553a307c5b2b5e06a44242ffe5b"} Feb 17 18:03:30 crc kubenswrapper[4680]: I0217 18:03:30.313732 4680 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/cinder-api-0" podUID="e60fee03-af47-4ec5-b656-06b344e31568" containerName="cinder-api-log" containerID="cri-o://b8b3f76aa8ab3fe2957a31ebc341d75d60a5b6dbf611bf579ae796a779c010f0" gracePeriod=30 Feb 17 18:03:30 crc kubenswrapper[4680]: I0217 18:03:30.314035 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 17 18:03:30 crc kubenswrapper[4680]: I0217 18:03:30.314081 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e60fee03-af47-4ec5-b656-06b344e31568" containerName="cinder-api" containerID="cri-o://3a3f7defff16075c89f4efbb4a4fb2976304e553a307c5b2b5e06a44242ffe5b" gracePeriod=30 Feb 17 18:03:30 crc kubenswrapper[4680]: I0217 18:03:30.326076 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3f11fb00-7f13-4757-9c5c-6261d32198a5","Type":"ContainerStarted","Data":"e97ceb3a545bd343c539431e7fb797c52d93980694d292d8edfc8a71e6d34097"} Feb 17 18:03:30 crc kubenswrapper[4680]: I0217 18:03:30.339773 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.339748809 podStartE2EDuration="5.339748809s" podCreationTimestamp="2026-02-17 18:03:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:03:30.33689121 +0000 UTC m=+1139.887789889" watchObservedRunningTime="2026-02-17 18:03:30.339748809 +0000 UTC m=+1139.890647488" Feb 17 18:03:30 crc kubenswrapper[4680]: I0217 18:03:30.341357 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-bdq9p" podStartSLOduration=5.341351369 podStartE2EDuration="5.341351369s" podCreationTimestamp="2026-02-17 18:03:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:03:29.289934486 +0000 UTC m=+1138.840833165" watchObservedRunningTime="2026-02-17 18:03:30.341351369 +0000 UTC m=+1139.892250048" Feb 17 18:03:30 crc kubenswrapper[4680]: I0217 18:03:30.376614 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.290944799 podStartE2EDuration="5.376594045s" podCreationTimestamp="2026-02-17 18:03:25 +0000 UTC" firstStartedPulling="2026-02-17 18:03:26.524611893 +0000 UTC m=+1136.075510572" lastFinishedPulling="2026-02-17 18:03:27.610261139 +0000 UTC m=+1137.161159818" observedRunningTime="2026-02-17 18:03:30.375959056 +0000 UTC m=+1139.926857735" watchObservedRunningTime="2026-02-17 18:03:30.376594045 +0000 UTC m=+1139.927492724" Feb 17 18:03:30 crc kubenswrapper[4680]: I0217 18:03:30.462543 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 17 18:03:30 crc kubenswrapper[4680]: I0217 18:03:30.990610 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-786b6cd966-svbnn" podUID="7f41da1e-3f88-41c7-ab31-f3efdd7790c9" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 18:03:31 crc kubenswrapper[4680]: I0217 18:03:31.348921 4680 generic.go:334] "Generic (PLEG): container finished" podID="e60fee03-af47-4ec5-b656-06b344e31568" 
containerID="b8b3f76aa8ab3fe2957a31ebc341d75d60a5b6dbf611bf579ae796a779c010f0" exitCode=143 Feb 17 18:03:31 crc kubenswrapper[4680]: I0217 18:03:31.349212 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e60fee03-af47-4ec5-b656-06b344e31568","Type":"ContainerDied","Data":"b8b3f76aa8ab3fe2957a31ebc341d75d60a5b6dbf611bf579ae796a779c010f0"} Feb 17 18:03:32 crc kubenswrapper[4680]: I0217 18:03:32.591937 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-786b6cd966-svbnn" Feb 17 18:03:34 crc kubenswrapper[4680]: I0217 18:03:34.014172 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-786b6cd966-svbnn" podUID="7f41da1e-3f88-41c7-ab31-f3efdd7790c9" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 18:03:34 crc kubenswrapper[4680]: I0217 18:03:34.662461 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6dc475ddf4-vbkll" Feb 17 18:03:35 crc kubenswrapper[4680]: I0217 18:03:35.094563 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6dc475ddf4-vbkll" podUID="a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.162:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 18:03:35 crc kubenswrapper[4680]: I0217 18:03:35.594447 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-bdq9p" Feb 17 18:03:35 crc kubenswrapper[4680]: I0217 18:03:35.676114 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-gvq9l"] Feb 17 18:03:35 crc kubenswrapper[4680]: I0217 18:03:35.676414 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" podUID="cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b" containerName="dnsmasq-dns" containerID="cri-o://9f4f97d0967d5e214dbb2c0f6bd53d008f02270da4a5a54447f2f6e7d0c9eec6" gracePeriod=10 Feb 17 18:03:36 crc kubenswrapper[4680]: I0217 18:03:36.210449 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6dc475ddf4-vbkll" podUID="a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.162:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 18:03:36 crc kubenswrapper[4680]: I0217 18:03:36.427581 4680 generic.go:334] "Generic (PLEG): container finished" podID="cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b" containerID="9f4f97d0967d5e214dbb2c0f6bd53d008f02270da4a5a54447f2f6e7d0c9eec6" exitCode=0 Feb 17 18:03:36 crc kubenswrapper[4680]: I0217 18:03:36.427812 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" event={"ID":"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b","Type":"ContainerDied","Data":"9f4f97d0967d5e214dbb2c0f6bd53d008f02270da4a5a54447f2f6e7d0c9eec6"} Feb 17 18:03:36 crc kubenswrapper[4680]: I0217 18:03:36.603679 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="3f11fb00-7f13-4757-9c5c-6261d32198a5" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 18:03:38 crc 
kubenswrapper[4680]: I0217 18:03:38.057912 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-786b6cd966-svbnn" Feb 17 18:03:38 crc kubenswrapper[4680]: I0217 18:03:38.087588 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6dc475ddf4-vbkll" podUID="a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.162:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 18:03:39 crc kubenswrapper[4680]: I0217 18:03:39.014768 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-786b6cd966-svbnn" podUID="7f41da1e-3f88-41c7-ab31-f3efdd7790c9" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 18:03:39 crc kubenswrapper[4680]: I0217 18:03:39.147862 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" podUID="cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: connect: connection refused" Feb 17 18:03:40 crc kubenswrapper[4680]: I0217 18:03:40.100523 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6dc475ddf4-vbkll" podUID="a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.162:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 18:03:40 crc kubenswrapper[4680]: I0217 18:03:40.205190 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-854d5d774d-kbx8n" Feb 17 18:03:40 crc kubenswrapper[4680]: I0217 18:03:40.489814 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 17 18:03:40 crc kubenswrapper[4680]: I0217 18:03:40.548087 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 18:03:40 crc kubenswrapper[4680]: I0217 18:03:40.645828 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="3f11fb00-7f13-4757-9c5c-6261d32198a5" containerName="cinder-scheduler" containerID="cri-o://e990cc9ce1632560b33ecd5097fcd50919af8af5d9eed2978790c71d8ef68ee9" gracePeriod=30 Feb 17 18:03:40 crc kubenswrapper[4680]: I0217 18:03:40.646548 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="3f11fb00-7f13-4757-9c5c-6261d32198a5" containerName="probe" containerID="cri-o://e97ceb3a545bd343c539431e7fb797c52d93980694d292d8edfc8a71e6d34097" gracePeriod=30 Feb 17 18:03:40 crc kubenswrapper[4680]: I0217 18:03:40.857180 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6dc475ddf4-vbkll" Feb 17 18:03:40 crc kubenswrapper[4680]: I0217 18:03:40.934501 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-786b6cd966-svbnn"] Feb 17 18:03:40 crc kubenswrapper[4680]: I0217 18:03:40.934789 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-786b6cd966-svbnn" podUID="7f41da1e-3f88-41c7-ab31-f3efdd7790c9" containerName="barbican-api-log" 
containerID="cri-o://282b0c8b16dbe8c26600cef43b182b2463c51ac9c06f546093896fddfaf64a8e" gracePeriod=30 Feb 17 18:03:40 crc kubenswrapper[4680]: I0217 18:03:40.934871 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-786b6cd966-svbnn" podUID="7f41da1e-3f88-41c7-ab31-f3efdd7790c9" containerName="barbican-api" containerID="cri-o://2b4f12123c5b7cc2505e3fb697f2641ea1b4fdbcffa172b0a519c570e915c859" gracePeriod=30 Feb 17 18:03:41 crc kubenswrapper[4680]: I0217 18:03:41.248479 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="e60fee03-af47-4ec5-b656-06b344e31568" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.165:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 18:03:41 crc kubenswrapper[4680]: I0217 18:03:41.672173 4680 generic.go:334] "Generic (PLEG): container finished" podID="7f41da1e-3f88-41c7-ab31-f3efdd7790c9" containerID="282b0c8b16dbe8c26600cef43b182b2463c51ac9c06f546093896fddfaf64a8e" exitCode=143 Feb 17 18:03:41 crc kubenswrapper[4680]: I0217 18:03:41.672224 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-786b6cd966-svbnn" event={"ID":"7f41da1e-3f88-41c7-ab31-f3efdd7790c9","Type":"ContainerDied","Data":"282b0c8b16dbe8c26600cef43b182b2463c51ac9c06f546093896fddfaf64a8e"} Feb 17 18:03:42 crc kubenswrapper[4680]: I0217 18:03:42.684106 4680 generic.go:334] "Generic (PLEG): container finished" podID="3f11fb00-7f13-4757-9c5c-6261d32198a5" containerID="e97ceb3a545bd343c539431e7fb797c52d93980694d292d8edfc8a71e6d34097" exitCode=0 Feb 17 18:03:42 crc kubenswrapper[4680]: I0217 18:03:42.684134 4680 generic.go:334] "Generic (PLEG): container finished" podID="3f11fb00-7f13-4757-9c5c-6261d32198a5" containerID="e990cc9ce1632560b33ecd5097fcd50919af8af5d9eed2978790c71d8ef68ee9" exitCode=0 Feb 17 18:03:42 crc kubenswrapper[4680]: I0217 18:03:42.684157 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3f11fb00-7f13-4757-9c5c-6261d32198a5","Type":"ContainerDied","Data":"e97ceb3a545bd343c539431e7fb797c52d93980694d292d8edfc8a71e6d34097"} Feb 17 18:03:42 crc kubenswrapper[4680]: I0217 18:03:42.684182 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3f11fb00-7f13-4757-9c5c-6261d32198a5","Type":"ContainerDied","Data":"e990cc9ce1632560b33ecd5097fcd50919af8af5d9eed2978790c71d8ef68ee9"} Feb 17 18:03:43 crc kubenswrapper[4680]: I0217 18:03:43.193295 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7cfbd9cd46-qsg4t" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.515158 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.557007 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-ovsdbserver-nb\") pod \"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b\" (UID: \"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b\") " Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.557137 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-config\") pod \"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b\" (UID: \"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b\") " Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.557198 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-dns-swift-storage-0\") pod \"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b\" (UID: \"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b\") " Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.557236 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlmfp\" (UniqueName: \"kubernetes.io/projected/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-kube-api-access-wlmfp\") pod \"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b\" (UID: \"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b\") " Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.557371 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-dns-svc\") pod \"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b\" (UID: \"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b\") " Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.557421 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-ovsdbserver-sb\") pod \"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b\" (UID: \"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b\") " Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.612813 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-kube-api-access-wlmfp" (OuterVolumeSpecName: "kube-api-access-wlmfp") pod "cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b" (UID: "cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b"). InnerVolumeSpecName "kube-api-access-wlmfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.668235 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlmfp\" (UniqueName: \"kubernetes.io/projected/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-kube-api-access-wlmfp\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.675076 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b" (UID: "cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.718273 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-786b6cd966-svbnn" podUID="7f41da1e-3f88-41c7-ab31-f3efdd7790c9" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:40264->10.217.0.160:9311: read: connection reset by peer" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.718308 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-786b6cd966-svbnn" podUID="7f41da1e-3f88-41c7-ab31-f3efdd7790c9" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:40280->10.217.0.160:9311: read: connection reset by peer" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.719021 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b" (UID: "cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.724011 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b" (UID: "cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.732857 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" event={"ID":"cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b","Type":"ContainerDied","Data":"b0a0271385ff4dbaf12e77bf10f6fc26d5c5d0f5ae435c0b5380572a0e6e3c2f"} Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.733070 4680 scope.go:117] "RemoveContainer" containerID="9f4f97d0967d5e214dbb2c0f6bd53d008f02270da4a5a54447f2f6e7d0c9eec6" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.733339 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.737802 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-config" (OuterVolumeSpecName: "config") pod "cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b" (UID: "cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.741805 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b" (UID: "cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.756715 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 17 18:03:44 crc kubenswrapper[4680]: E0217 18:03:44.757075 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6807c66c-2b59-4822-a455-7eda561b742f" containerName="dnsmasq-dns" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.757091 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="6807c66c-2b59-4822-a455-7eda561b742f" containerName="dnsmasq-dns" Feb 17 18:03:44 crc kubenswrapper[4680]: E0217 18:03:44.757106 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b" containerName="init" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.757112 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b" containerName="init" Feb 17 18:03:44 crc kubenswrapper[4680]: E0217 18:03:44.757130 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b" containerName="dnsmasq-dns" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.757137 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b" containerName="dnsmasq-dns" Feb 17 18:03:44 crc kubenswrapper[4680]: E0217 18:03:44.757161 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6807c66c-2b59-4822-a455-7eda561b742f" containerName="init" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.757166 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="6807c66c-2b59-4822-a455-7eda561b742f" containerName="init" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.763233 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="6807c66c-2b59-4822-a455-7eda561b742f" containerName="dnsmasq-dns" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.763283 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b" containerName="dnsmasq-dns" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.764109 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.766173 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-rldgw" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.766727 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.767032 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.770306 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-config\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.770353 4680 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.770366 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.770375 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.770384 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.789254 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.811268 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.871843 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a0986f8d-da98-4f6e-bd4e-457e4f405022-openstack-config-secret\") pod \"openstackclient\" (UID: \"a0986f8d-da98-4f6e-bd4e-457e4f405022\") " pod="openstack/openstackclient" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.872137 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a0986f8d-da98-4f6e-bd4e-457e4f405022-openstack-config\") pod \"openstackclient\" (UID: \"a0986f8d-da98-4f6e-bd4e-457e4f405022\") " pod="openstack/openstackclient" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.872374 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgtgv\" (UniqueName: \"kubernetes.io/projected/a0986f8d-da98-4f6e-bd4e-457e4f405022-kube-api-access-pgtgv\") pod \"openstackclient\" (UID: \"a0986f8d-da98-4f6e-bd4e-457e4f405022\") " pod="openstack/openstackclient" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.872469 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0986f8d-da98-4f6e-bd4e-457e4f405022-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a0986f8d-da98-4f6e-bd4e-457e4f405022\") " pod="openstack/openstackclient" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.973875 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a0986f8d-da98-4f6e-bd4e-457e4f405022-openstack-config-secret\") pod \"openstackclient\" (UID: \"a0986f8d-da98-4f6e-bd4e-457e4f405022\") " pod="openstack/openstackclient" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.973967 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a0986f8d-da98-4f6e-bd4e-457e4f405022-openstack-config\") pod \"openstackclient\" (UID: \"a0986f8d-da98-4f6e-bd4e-457e4f405022\") " pod="openstack/openstackclient" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.974028 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgtgv\" (UniqueName: \"kubernetes.io/projected/a0986f8d-da98-4f6e-bd4e-457e4f405022-kube-api-access-pgtgv\") pod \"openstackclient\" (UID: \"a0986f8d-da98-4f6e-bd4e-457e4f405022\") " pod="openstack/openstackclient" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.974054 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0986f8d-da98-4f6e-bd4e-457e4f405022-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a0986f8d-da98-4f6e-bd4e-457e4f405022\") " pod="openstack/openstackclient" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.975013 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a0986f8d-da98-4f6e-bd4e-457e4f405022-openstack-config\") pod \"openstackclient\" (UID: \"a0986f8d-da98-4f6e-bd4e-457e4f405022\") " pod="openstack/openstackclient" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.979644 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a0986f8d-da98-4f6e-bd4e-457e4f405022-openstack-config-secret\") pod \"openstackclient\" (UID: \"a0986f8d-da98-4f6e-bd4e-457e4f405022\") " pod="openstack/openstackclient" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.982914 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0986f8d-da98-4f6e-bd4e-457e4f405022-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a0986f8d-da98-4f6e-bd4e-457e4f405022\") " pod="openstack/openstackclient" Feb 17 18:03:44 crc kubenswrapper[4680]: I0217 18:03:44.996133 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgtgv\" (UniqueName: \"kubernetes.io/projected/a0986f8d-da98-4f6e-bd4e-457e4f405022-kube-api-access-pgtgv\") pod \"openstackclient\" (UID: \"a0986f8d-da98-4f6e-bd4e-457e4f405022\") " pod="openstack/openstackclient" Feb 17 18:03:45 crc kubenswrapper[4680]: I0217 18:03:45.086259 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-gvq9l"] Feb 17 18:03:45 crc kubenswrapper[4680]: I0217 18:03:45.103108 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-gvq9l"] Feb 17 18:03:45 crc kubenswrapper[4680]: I0217 
18:03:45.121232 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 17 18:03:45 crc kubenswrapper[4680]: I0217 18:03:45.130057 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b" path="/var/lib/kubelet/pods/cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b/volumes" Feb 17 18:03:45 crc kubenswrapper[4680]: I0217 18:03:45.704496 4680 scope.go:117] "RemoveContainer" containerID="0b53c527230ed8c8bf0428a4c6601b78aa8af438049cb5807e01ef617b369d2f" Feb 17 18:03:45 crc kubenswrapper[4680]: E0217 18:03:45.711122 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Feb 17 18:03:45 crc kubenswrapper[4680]: E0217 18:03:45.711660 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bw45w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ceilometer-0_openstack(7f5f74b1-9667-421c-94e6-484208c36f55): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 18:03:45 crc kubenswrapper[4680]: E0217 18:03:45.713249 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="7f5f74b1-9667-421c-94e6-484208c36f55" Feb 17 18:03:45 crc kubenswrapper[4680]: I0217 18:03:45.753070 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3f11fb00-7f13-4757-9c5c-6261d32198a5","Type":"ContainerDied","Data":"72342d6befb03a9dc7f69140e4cf7e4a730116e54dcfe0e4cd765094e1c24088"} Feb 17 18:03:45 crc kubenswrapper[4680]: I0217 18:03:45.753111 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72342d6befb03a9dc7f69140e4cf7e4a730116e54dcfe0e4cd765094e1c24088" Feb 17 18:03:45 crc kubenswrapper[4680]: I0217 18:03:45.755066 4680 generic.go:334] "Generic (PLEG): container finished" podID="7f41da1e-3f88-41c7-ab31-f3efdd7790c9" containerID="2b4f12123c5b7cc2505e3fb697f2641ea1b4fdbcffa172b0a519c570e915c859" exitCode=0 Feb 17 18:03:45 crc kubenswrapper[4680]: I0217 18:03:45.755132 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-786b6cd966-svbnn" event={"ID":"7f41da1e-3f88-41c7-ab31-f3efdd7790c9","Type":"ContainerDied","Data":"2b4f12123c5b7cc2505e3fb697f2641ea1b4fdbcffa172b0a519c570e915c859"} Feb 17 18:03:45 crc kubenswrapper[4680]: I0217 18:03:45.756870 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7f5f74b1-9667-421c-94e6-484208c36f55" containerName="ceilometer-notification-agent" containerID="cri-o://17d510f98f9ce9a63a955c0961a6b17e0c210d04faf62622027c8f851a32bbab" gracePeriod=30 Feb 17 18:03:45 crc kubenswrapper[4680]: I0217 18:03:45.756903 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7f5f74b1-9667-421c-94e6-484208c36f55" containerName="sg-core" containerID="cri-o://1950124cbfede488894c7094f1924801f760a455c9bfec4648bbad9c4e82e9bd" gracePeriod=30 Feb 17 18:03:45 crc kubenswrapper[4680]: I0217 18:03:45.886686 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.006490 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f11fb00-7f13-4757-9c5c-6261d32198a5-config-data-custom\") pod \"3f11fb00-7f13-4757-9c5c-6261d32198a5\" (UID: \"3f11fb00-7f13-4757-9c5c-6261d32198a5\") " Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.007926 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f11fb00-7f13-4757-9c5c-6261d32198a5-etc-machine-id\") pod \"3f11fb00-7f13-4757-9c5c-6261d32198a5\" (UID: \"3f11fb00-7f13-4757-9c5c-6261d32198a5\") " Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.008036 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz5d2\" (UniqueName: \"kubernetes.io/projected/3f11fb00-7f13-4757-9c5c-6261d32198a5-kube-api-access-bz5d2\") pod \"3f11fb00-7f13-4757-9c5c-6261d32198a5\" (UID: \"3f11fb00-7f13-4757-9c5c-6261d32198a5\") " Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.008108 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f11fb00-7f13-4757-9c5c-6261d32198a5-config-data\") pod \"3f11fb00-7f13-4757-9c5c-6261d32198a5\" (UID: \"3f11fb00-7f13-4757-9c5c-6261d32198a5\") " Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.008152 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f11fb00-7f13-4757-9c5c-6261d32198a5-scripts\") pod \"3f11fb00-7f13-4757-9c5c-6261d32198a5\" (UID: \"3f11fb00-7f13-4757-9c5c-6261d32198a5\") " Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.008180 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f11fb00-7f13-4757-9c5c-6261d32198a5-combined-ca-bundle\") pod \"3f11fb00-7f13-4757-9c5c-6261d32198a5\" (UID: \"3f11fb00-7f13-4757-9c5c-6261d32198a5\") " Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.009987 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f11fb00-7f13-4757-9c5c-6261d32198a5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3f11fb00-7f13-4757-9c5c-6261d32198a5" (UID: "3f11fb00-7f13-4757-9c5c-6261d32198a5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.016700 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f11fb00-7f13-4757-9c5c-6261d32198a5-scripts" (OuterVolumeSpecName: "scripts") pod "3f11fb00-7f13-4757-9c5c-6261d32198a5" (UID: "3f11fb00-7f13-4757-9c5c-6261d32198a5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.018481 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f11fb00-7f13-4757-9c5c-6261d32198a5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3f11fb00-7f13-4757-9c5c-6261d32198a5" (UID: "3f11fb00-7f13-4757-9c5c-6261d32198a5"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.019145 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f11fb00-7f13-4757-9c5c-6261d32198a5-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.019165 4680 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f11fb00-7f13-4757-9c5c-6261d32198a5-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.019176 4680 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f11fb00-7f13-4757-9c5c-6261d32198a5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.026521 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f11fb00-7f13-4757-9c5c-6261d32198a5-kube-api-access-bz5d2" (OuterVolumeSpecName: "kube-api-access-bz5d2") pod "3f11fb00-7f13-4757-9c5c-6261d32198a5" (UID: "3f11fb00-7f13-4757-9c5c-6261d32198a5"). InnerVolumeSpecName "kube-api-access-bz5d2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.110842 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-786b6cd966-svbnn" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.111763 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f11fb00-7f13-4757-9c5c-6261d32198a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f11fb00-7f13-4757-9c5c-6261d32198a5" (UID: "3f11fb00-7f13-4757-9c5c-6261d32198a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.120774 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz5d2\" (UniqueName: \"kubernetes.io/projected/3f11fb00-7f13-4757-9c5c-6261d32198a5-kube-api-access-bz5d2\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.120809 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f11fb00-7f13-4757-9c5c-6261d32198a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.167188 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f11fb00-7f13-4757-9c5c-6261d32198a5-config-data" (OuterVolumeSpecName: "config-data") pod "3f11fb00-7f13-4757-9c5c-6261d32198a5" (UID: "3f11fb00-7f13-4757-9c5c-6261d32198a5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.222252 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f41da1e-3f88-41c7-ab31-f3efdd7790c9-config-data-custom\") pod \"7f41da1e-3f88-41c7-ab31-f3efdd7790c9\" (UID: \"7f41da1e-3f88-41c7-ab31-f3efdd7790c9\") " Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.222436 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f41da1e-3f88-41c7-ab31-f3efdd7790c9-config-data\") pod \"7f41da1e-3f88-41c7-ab31-f3efdd7790c9\" (UID: \"7f41da1e-3f88-41c7-ab31-f3efdd7790c9\") " Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.222472 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f41da1e-3f88-41c7-ab31-f3efdd7790c9-combined-ca-bundle\") pod \"7f41da1e-3f88-41c7-ab31-f3efdd7790c9\" (UID: \"7f41da1e-3f88-41c7-ab31-f3efdd7790c9\") " Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.222598 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkdhh\" (UniqueName: \"kubernetes.io/projected/7f41da1e-3f88-41c7-ab31-f3efdd7790c9-kube-api-access-kkdhh\") pod \"7f41da1e-3f88-41c7-ab31-f3efdd7790c9\" (UID: \"7f41da1e-3f88-41c7-ab31-f3efdd7790c9\") " Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.222700 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f41da1e-3f88-41c7-ab31-f3efdd7790c9-logs\") pod \"7f41da1e-3f88-41c7-ab31-f3efdd7790c9\" (UID: \"7f41da1e-3f88-41c7-ab31-f3efdd7790c9\") " Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.223302 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f11fb00-7f13-4757-9c5c-6261d32198a5-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.224964 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f41da1e-3f88-41c7-ab31-f3efdd7790c9-logs" (OuterVolumeSpecName: "logs") pod "7f41da1e-3f88-41c7-ab31-f3efdd7790c9" (UID: "7f41da1e-3f88-41c7-ab31-f3efdd7790c9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.228097 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f41da1e-3f88-41c7-ab31-f3efdd7790c9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7f41da1e-3f88-41c7-ab31-f3efdd7790c9" (UID: "7f41da1e-3f88-41c7-ab31-f3efdd7790c9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.233526 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f41da1e-3f88-41c7-ab31-f3efdd7790c9-kube-api-access-kkdhh" (OuterVolumeSpecName: "kube-api-access-kkdhh") pod "7f41da1e-3f88-41c7-ab31-f3efdd7790c9" (UID: "7f41da1e-3f88-41c7-ab31-f3efdd7790c9"). InnerVolumeSpecName "kube-api-access-kkdhh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.268328 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f41da1e-3f88-41c7-ab31-f3efdd7790c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f41da1e-3f88-41c7-ab31-f3efdd7790c9" (UID: "7f41da1e-3f88-41c7-ab31-f3efdd7790c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.288663 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f41da1e-3f88-41c7-ab31-f3efdd7790c9-config-data" (OuterVolumeSpecName: "config-data") pod "7f41da1e-3f88-41c7-ab31-f3efdd7790c9" (UID: "7f41da1e-3f88-41c7-ab31-f3efdd7790c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.293761 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-59fd4ccc67-hf8zf" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.325191 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f41da1e-3f88-41c7-ab31-f3efdd7790c9-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.325233 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f41da1e-3f88-41c7-ab31-f3efdd7790c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.325248 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkdhh\" (UniqueName: \"kubernetes.io/projected/7f41da1e-3f88-41c7-ab31-f3efdd7790c9-kube-api-access-kkdhh\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.325262 4680 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f41da1e-3f88-41c7-ab31-f3efdd7790c9-logs\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.325275 4680 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f41da1e-3f88-41c7-ab31-f3efdd7790c9-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.376235 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7cfbd9cd46-qsg4t"] Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.376549 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7cfbd9cd46-qsg4t" podUID="be0e9165-8f19-438f-a422-eb09778329be" containerName="neutron-api" containerID="cri-o://02b9ab7a9af5c8fa6cb72f82609c11288d9fa2dd66d2b2518e3e3be36cfabb87" gracePeriod=30 Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.376622 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7cfbd9cd46-qsg4t" podUID="be0e9165-8f19-438f-a422-eb09778329be" containerName="neutron-httpd" containerID="cri-o://98cfbb44b49ce52953de041f8e9706e8267bf17c6b93a04705aa6e2f5b08c269" gracePeriod=30 Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.428657 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.768776 4680 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-786b6cd966-svbnn" event={"ID":"7f41da1e-3f88-41c7-ab31-f3efdd7790c9","Type":"ContainerDied","Data":"b3c4ed2d7901ef4768b42fa658b3cd4497b1a922b0ab45260a8f66ec7f59c3fa"} Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.768833 4680 scope.go:117] "RemoveContainer" containerID="2b4f12123c5b7cc2505e3fb697f2641ea1b4fdbcffa172b0a519c570e915c859" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.768955 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-786b6cd966-svbnn" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.775593 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a0986f8d-da98-4f6e-bd4e-457e4f405022","Type":"ContainerStarted","Data":"fb2c1d26e7cd410ff9bfd06e5e842c2bfbf2314a4d4c1a7a429c55a0e5c9a7a4"} Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.785245 4680 generic.go:334] "Generic (PLEG): container finished" podID="7f5f74b1-9667-421c-94e6-484208c36f55" containerID="1950124cbfede488894c7094f1924801f760a455c9bfec4648bbad9c4e82e9bd" exitCode=2 Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.785370 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.788889 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f5f74b1-9667-421c-94e6-484208c36f55","Type":"ContainerDied","Data":"1950124cbfede488894c7094f1924801f760a455c9bfec4648bbad9c4e82e9bd"} Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.824206 4680 scope.go:117] "RemoveContainer" containerID="282b0c8b16dbe8c26600cef43b182b2463c51ac9c06f546093896fddfaf64a8e" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.835402 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-786b6cd966-svbnn"] Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.843748 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-786b6cd966-svbnn"] Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.909672 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.926166 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.942119 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 18:03:46 crc kubenswrapper[4680]: E0217 18:03:46.943565 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f41da1e-3f88-41c7-ab31-f3efdd7790c9" containerName="barbican-api" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.944827 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f41da1e-3f88-41c7-ab31-f3efdd7790c9" containerName="barbican-api" Feb 17 18:03:46 crc kubenswrapper[4680]: E0217 18:03:46.944886 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f11fb00-7f13-4757-9c5c-6261d32198a5" containerName="probe" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.944898 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f11fb00-7f13-4757-9c5c-6261d32198a5" containerName="probe" Feb 17 18:03:46 crc kubenswrapper[4680]: E0217 18:03:46.944935 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f41da1e-3f88-41c7-ab31-f3efdd7790c9" 
containerName="barbican-api-log" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.944945 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f41da1e-3f88-41c7-ab31-f3efdd7790c9" containerName="barbican-api-log" Feb 17 18:03:46 crc kubenswrapper[4680]: E0217 18:03:46.944969 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f11fb00-7f13-4757-9c5c-6261d32198a5" containerName="cinder-scheduler" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.944977 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f11fb00-7f13-4757-9c5c-6261d32198a5" containerName="cinder-scheduler" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.946366 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f41da1e-3f88-41c7-ab31-f3efdd7790c9" containerName="barbican-api-log" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.946416 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f41da1e-3f88-41c7-ab31-f3efdd7790c9" containerName="barbican-api" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.946455 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f11fb00-7f13-4757-9c5c-6261d32198a5" containerName="probe" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.946477 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f11fb00-7f13-4757-9c5c-6261d32198a5" containerName="cinder-scheduler" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.956702 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.961628 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 17 18:03:46 crc kubenswrapper[4680]: I0217 18:03:46.965578 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 18:03:47 crc kubenswrapper[4680]: I0217 18:03:47.042557 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7171f050-7893-4d68-b65a-9425776b1301-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7171f050-7893-4d68-b65a-9425776b1301\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:47 crc kubenswrapper[4680]: I0217 18:03:47.042602 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2m64\" (UniqueName: \"kubernetes.io/projected/7171f050-7893-4d68-b65a-9425776b1301-kube-api-access-q2m64\") pod \"cinder-scheduler-0\" (UID: \"7171f050-7893-4d68-b65a-9425776b1301\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:47 crc kubenswrapper[4680]: I0217 18:03:47.042634 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7171f050-7893-4d68-b65a-9425776b1301-config-data\") pod \"cinder-scheduler-0\" (UID: \"7171f050-7893-4d68-b65a-9425776b1301\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:47 crc kubenswrapper[4680]: I0217 18:03:47.042715 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7171f050-7893-4d68-b65a-9425776b1301-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7171f050-7893-4d68-b65a-9425776b1301\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:47 crc kubenswrapper[4680]: I0217 
18:03:47.042738 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7171f050-7893-4d68-b65a-9425776b1301-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7171f050-7893-4d68-b65a-9425776b1301\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:47 crc kubenswrapper[4680]: I0217 18:03:47.042763 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7171f050-7893-4d68-b65a-9425776b1301-scripts\") pod \"cinder-scheduler-0\" (UID: \"7171f050-7893-4d68-b65a-9425776b1301\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:47 crc kubenswrapper[4680]: I0217 18:03:47.120843 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f11fb00-7f13-4757-9c5c-6261d32198a5" path="/var/lib/kubelet/pods/3f11fb00-7f13-4757-9c5c-6261d32198a5/volumes" Feb 17 18:03:47 crc kubenswrapper[4680]: I0217 18:03:47.121778 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f41da1e-3f88-41c7-ab31-f3efdd7790c9" path="/var/lib/kubelet/pods/7f41da1e-3f88-41c7-ab31-f3efdd7790c9/volumes" Feb 17 18:03:47 crc kubenswrapper[4680]: I0217 18:03:47.148322 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7171f050-7893-4d68-b65a-9425776b1301-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7171f050-7893-4d68-b65a-9425776b1301\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:47 crc kubenswrapper[4680]: I0217 18:03:47.148389 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7171f050-7893-4d68-b65a-9425776b1301-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7171f050-7893-4d68-b65a-9425776b1301\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:47 crc kubenswrapper[4680]: I0217 18:03:47.148421 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7171f050-7893-4d68-b65a-9425776b1301-scripts\") pod \"cinder-scheduler-0\" (UID: \"7171f050-7893-4d68-b65a-9425776b1301\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:47 crc kubenswrapper[4680]: I0217 18:03:47.148494 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7171f050-7893-4d68-b65a-9425776b1301-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7171f050-7893-4d68-b65a-9425776b1301\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:47 crc kubenswrapper[4680]: I0217 18:03:47.148509 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2m64\" (UniqueName: \"kubernetes.io/projected/7171f050-7893-4d68-b65a-9425776b1301-kube-api-access-q2m64\") pod \"cinder-scheduler-0\" (UID: \"7171f050-7893-4d68-b65a-9425776b1301\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:47 crc kubenswrapper[4680]: I0217 18:03:47.148532 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7171f050-7893-4d68-b65a-9425776b1301-config-data\") pod \"cinder-scheduler-0\" (UID: \"7171f050-7893-4d68-b65a-9425776b1301\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:47 crc kubenswrapper[4680]: I0217 18:03:47.150621 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7171f050-7893-4d68-b65a-9425776b1301-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7171f050-7893-4d68-b65a-9425776b1301\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:47 crc kubenswrapper[4680]: I0217 18:03:47.162255 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7171f050-7893-4d68-b65a-9425776b1301-scripts\") pod \"cinder-scheduler-0\" (UID: \"7171f050-7893-4d68-b65a-9425776b1301\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:47 crc kubenswrapper[4680]: I0217 18:03:47.162862 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7171f050-7893-4d68-b65a-9425776b1301-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7171f050-7893-4d68-b65a-9425776b1301\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:47 crc kubenswrapper[4680]: I0217 18:03:47.162883 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7171f050-7893-4d68-b65a-9425776b1301-config-data\") pod \"cinder-scheduler-0\" (UID: \"7171f050-7893-4d68-b65a-9425776b1301\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:47 crc kubenswrapper[4680]: I0217 18:03:47.177913 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2m64\" (UniqueName: \"kubernetes.io/projected/7171f050-7893-4d68-b65a-9425776b1301-kube-api-access-q2m64\") pod \"cinder-scheduler-0\" (UID: \"7171f050-7893-4d68-b65a-9425776b1301\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:47 crc kubenswrapper[4680]: I0217 18:03:47.178314 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7171f050-7893-4d68-b65a-9425776b1301-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7171f050-7893-4d68-b65a-9425776b1301\") " pod="openstack/cinder-scheduler-0" Feb 17 18:03:47 crc kubenswrapper[4680]: I0217 18:03:47.273412 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 18:03:47 crc kubenswrapper[4680]: I0217 18:03:47.798072 4680 generic.go:334] "Generic (PLEG): container finished" podID="be0e9165-8f19-438f-a422-eb09778329be" containerID="98cfbb44b49ce52953de041f8e9706e8267bf17c6b93a04705aa6e2f5b08c269" exitCode=0 Feb 17 18:03:47 crc kubenswrapper[4680]: I0217 18:03:47.798173 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cfbd9cd46-qsg4t" event={"ID":"be0e9165-8f19-438f-a422-eb09778329be","Type":"ContainerDied","Data":"98cfbb44b49ce52953de041f8e9706e8267bf17c6b93a04705aa6e2f5b08c269"} Feb 17 18:03:47 crc kubenswrapper[4680]: I0217 18:03:47.836735 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 18:03:48 crc kubenswrapper[4680]: I0217 18:03:48.824934 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7171f050-7893-4d68-b65a-9425776b1301","Type":"ContainerStarted","Data":"6d99323e3730d9f407ba122738797ee55ae0532db555dead53cf2b92413b9e6d"} Feb 17 18:03:48 crc kubenswrapper[4680]: I0217 18:03:48.825290 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7171f050-7893-4d68-b65a-9425776b1301","Type":"ContainerStarted","Data":"69b483622ab4788ef13f7961680d139121e175ff3be2977f6184fe1a1de4c5a3"} Feb 17 18:03:49 crc kubenswrapper[4680]: I0217 18:03:49.145251 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-gvq9l" podUID="cb8cfca8-b7f0-4b3f-85f0-85b9c965cd9b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: i/o timeout" Feb 17 18:03:49 crc kubenswrapper[4680]: I0217 18:03:49.835212 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7171f050-7893-4d68-b65a-9425776b1301","Type":"ContainerStarted","Data":"348df59f9b3e2e591f807bd4533df5edd03264da885637d239c4740f69edee8d"} Feb 17 18:03:49 crc kubenswrapper[4680]: I0217 18:03:49.856197 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.856178046 podStartE2EDuration="3.856178046s" podCreationTimestamp="2026-02-17 18:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:03:49.852853692 +0000 UTC m=+1159.403752381" watchObservedRunningTime="2026-02-17 18:03:49.856178046 +0000 UTC m=+1159.407076725" Feb 17 18:03:51 crc kubenswrapper[4680]: I0217 18:03:51.912387 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5c85cf9547-qhbrg"] Feb 17 18:03:51 crc kubenswrapper[4680]: I0217 18:03:51.914221 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5c85cf9547-qhbrg" Feb 17 18:03:51 crc kubenswrapper[4680]: I0217 18:03:51.928697 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 17 18:03:51 crc kubenswrapper[4680]: I0217 18:03:51.928738 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 17 18:03:51 crc kubenswrapper[4680]: I0217 18:03:51.929080 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 17 18:03:51 crc kubenswrapper[4680]: I0217 18:03:51.932974 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5c85cf9547-qhbrg"] Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.062991 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8411d6-1f53-4a7f-8528-0b3719bc5a6f-combined-ca-bundle\") pod \"swift-proxy-5c85cf9547-qhbrg\" (UID: \"fc8411d6-1f53-4a7f-8528-0b3719bc5a6f\") " pod="openstack/swift-proxy-5c85cf9547-qhbrg" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.063051 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc8411d6-1f53-4a7f-8528-0b3719bc5a6f-config-data\") pod \"swift-proxy-5c85cf9547-qhbrg\" (UID: \"fc8411d6-1f53-4a7f-8528-0b3719bc5a6f\") " pod="openstack/swift-proxy-5c85cf9547-qhbrg" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.063079 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc8411d6-1f53-4a7f-8528-0b3719bc5a6f-run-httpd\") pod \"swift-proxy-5c85cf9547-qhbrg\" (UID: \"fc8411d6-1f53-4a7f-8528-0b3719bc5a6f\") " pod="openstack/swift-proxy-5c85cf9547-qhbrg" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.063165 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvzbg\" (UniqueName: \"kubernetes.io/projected/fc8411d6-1f53-4a7f-8528-0b3719bc5a6f-kube-api-access-gvzbg\") pod \"swift-proxy-5c85cf9547-qhbrg\" (UID: \"fc8411d6-1f53-4a7f-8528-0b3719bc5a6f\") " pod="openstack/swift-proxy-5c85cf9547-qhbrg" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.063210 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fc8411d6-1f53-4a7f-8528-0b3719bc5a6f-etc-swift\") pod \"swift-proxy-5c85cf9547-qhbrg\" (UID: \"fc8411d6-1f53-4a7f-8528-0b3719bc5a6f\") " pod="openstack/swift-proxy-5c85cf9547-qhbrg" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.063234 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc8411d6-1f53-4a7f-8528-0b3719bc5a6f-public-tls-certs\") pod \"swift-proxy-5c85cf9547-qhbrg\" (UID: \"fc8411d6-1f53-4a7f-8528-0b3719bc5a6f\") " pod="openstack/swift-proxy-5c85cf9547-qhbrg" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.063268 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc8411d6-1f53-4a7f-8528-0b3719bc5a6f-internal-tls-certs\") pod \"swift-proxy-5c85cf9547-qhbrg\" (UID: \"fc8411d6-1f53-4a7f-8528-0b3719bc5a6f\") " 
pod="openstack/swift-proxy-5c85cf9547-qhbrg" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.063364 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc8411d6-1f53-4a7f-8528-0b3719bc5a6f-log-httpd\") pod \"swift-proxy-5c85cf9547-qhbrg\" (UID: \"fc8411d6-1f53-4a7f-8528-0b3719bc5a6f\") " pod="openstack/swift-proxy-5c85cf9547-qhbrg" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.165648 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8411d6-1f53-4a7f-8528-0b3719bc5a6f-combined-ca-bundle\") pod \"swift-proxy-5c85cf9547-qhbrg\" (UID: \"fc8411d6-1f53-4a7f-8528-0b3719bc5a6f\") " pod="openstack/swift-proxy-5c85cf9547-qhbrg" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.166037 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc8411d6-1f53-4a7f-8528-0b3719bc5a6f-config-data\") pod \"swift-proxy-5c85cf9547-qhbrg\" (UID: \"fc8411d6-1f53-4a7f-8528-0b3719bc5a6f\") " pod="openstack/swift-proxy-5c85cf9547-qhbrg" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.166074 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc8411d6-1f53-4a7f-8528-0b3719bc5a6f-run-httpd\") pod \"swift-proxy-5c85cf9547-qhbrg\" (UID: \"fc8411d6-1f53-4a7f-8528-0b3719bc5a6f\") " pod="openstack/swift-proxy-5c85cf9547-qhbrg" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.166280 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvzbg\" (UniqueName: \"kubernetes.io/projected/fc8411d6-1f53-4a7f-8528-0b3719bc5a6f-kube-api-access-gvzbg\") pod \"swift-proxy-5c85cf9547-qhbrg\" (UID: \"fc8411d6-1f53-4a7f-8528-0b3719bc5a6f\") " pod="openstack/swift-proxy-5c85cf9547-qhbrg" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.167200 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fc8411d6-1f53-4a7f-8528-0b3719bc5a6f-etc-swift\") pod \"swift-proxy-5c85cf9547-qhbrg\" (UID: \"fc8411d6-1f53-4a7f-8528-0b3719bc5a6f\") " pod="openstack/swift-proxy-5c85cf9547-qhbrg" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.167265 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc8411d6-1f53-4a7f-8528-0b3719bc5a6f-public-tls-certs\") pod \"swift-proxy-5c85cf9547-qhbrg\" (UID: \"fc8411d6-1f53-4a7f-8528-0b3719bc5a6f\") " pod="openstack/swift-proxy-5c85cf9547-qhbrg" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.167368 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc8411d6-1f53-4a7f-8528-0b3719bc5a6f-internal-tls-certs\") pod \"swift-proxy-5c85cf9547-qhbrg\" (UID: \"fc8411d6-1f53-4a7f-8528-0b3719bc5a6f\") " pod="openstack/swift-proxy-5c85cf9547-qhbrg" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.167459 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc8411d6-1f53-4a7f-8528-0b3719bc5a6f-log-httpd\") pod \"swift-proxy-5c85cf9547-qhbrg\" (UID: \"fc8411d6-1f53-4a7f-8528-0b3719bc5a6f\") " pod="openstack/swift-proxy-5c85cf9547-qhbrg" Feb 17 
18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.172633 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc8411d6-1f53-4a7f-8528-0b3719bc5a6f-log-httpd\") pod \"swift-proxy-5c85cf9547-qhbrg\" (UID: \"fc8411d6-1f53-4a7f-8528-0b3719bc5a6f\") " pod="openstack/swift-proxy-5c85cf9547-qhbrg" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.173293 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc8411d6-1f53-4a7f-8528-0b3719bc5a6f-run-httpd\") pod \"swift-proxy-5c85cf9547-qhbrg\" (UID: \"fc8411d6-1f53-4a7f-8528-0b3719bc5a6f\") " pod="openstack/swift-proxy-5c85cf9547-qhbrg" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.185826 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc8411d6-1f53-4a7f-8528-0b3719bc5a6f-config-data\") pod \"swift-proxy-5c85cf9547-qhbrg\" (UID: \"fc8411d6-1f53-4a7f-8528-0b3719bc5a6f\") " pod="openstack/swift-proxy-5c85cf9547-qhbrg" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.191648 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvzbg\" (UniqueName: \"kubernetes.io/projected/fc8411d6-1f53-4a7f-8528-0b3719bc5a6f-kube-api-access-gvzbg\") pod \"swift-proxy-5c85cf9547-qhbrg\" (UID: \"fc8411d6-1f53-4a7f-8528-0b3719bc5a6f\") " pod="openstack/swift-proxy-5c85cf9547-qhbrg" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.196138 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc8411d6-1f53-4a7f-8528-0b3719bc5a6f-public-tls-certs\") pod \"swift-proxy-5c85cf9547-qhbrg\" (UID: \"fc8411d6-1f53-4a7f-8528-0b3719bc5a6f\") " pod="openstack/swift-proxy-5c85cf9547-qhbrg" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.196245 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc8411d6-1f53-4a7f-8528-0b3719bc5a6f-internal-tls-certs\") pod \"swift-proxy-5c85cf9547-qhbrg\" (UID: \"fc8411d6-1f53-4a7f-8528-0b3719bc5a6f\") " pod="openstack/swift-proxy-5c85cf9547-qhbrg" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.198197 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fc8411d6-1f53-4a7f-8528-0b3719bc5a6f-etc-swift\") pod \"swift-proxy-5c85cf9547-qhbrg\" (UID: \"fc8411d6-1f53-4a7f-8528-0b3719bc5a6f\") " pod="openstack/swift-proxy-5c85cf9547-qhbrg" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.203122 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8411d6-1f53-4a7f-8528-0b3719bc5a6f-combined-ca-bundle\") pod \"swift-proxy-5c85cf9547-qhbrg\" (UID: \"fc8411d6-1f53-4a7f-8528-0b3719bc5a6f\") " pod="openstack/swift-proxy-5c85cf9547-qhbrg" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.260780 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5c85cf9547-qhbrg" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.274408 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.412497 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.474882 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f5f74b1-9667-421c-94e6-484208c36f55-scripts\") pod \"7f5f74b1-9667-421c-94e6-484208c36f55\" (UID: \"7f5f74b1-9667-421c-94e6-484208c36f55\") " Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.475286 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f5f74b1-9667-421c-94e6-484208c36f55-config-data\") pod \"7f5f74b1-9667-421c-94e6-484208c36f55\" (UID: \"7f5f74b1-9667-421c-94e6-484208c36f55\") " Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.475356 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f5f74b1-9667-421c-94e6-484208c36f55-combined-ca-bundle\") pod \"7f5f74b1-9667-421c-94e6-484208c36f55\" (UID: \"7f5f74b1-9667-421c-94e6-484208c36f55\") " Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.475451 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f5f74b1-9667-421c-94e6-484208c36f55-sg-core-conf-yaml\") pod \"7f5f74b1-9667-421c-94e6-484208c36f55\" (UID: \"7f5f74b1-9667-421c-94e6-484208c36f55\") " Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.475479 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw45w\" (UniqueName: \"kubernetes.io/projected/7f5f74b1-9667-421c-94e6-484208c36f55-kube-api-access-bw45w\") pod \"7f5f74b1-9667-421c-94e6-484208c36f55\" (UID: \"7f5f74b1-9667-421c-94e6-484208c36f55\") " Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.475521 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f5f74b1-9667-421c-94e6-484208c36f55-run-httpd\") pod \"7f5f74b1-9667-421c-94e6-484208c36f55\" (UID: \"7f5f74b1-9667-421c-94e6-484208c36f55\") " Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.475540 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f5f74b1-9667-421c-94e6-484208c36f55-log-httpd\") pod \"7f5f74b1-9667-421c-94e6-484208c36f55\" (UID: \"7f5f74b1-9667-421c-94e6-484208c36f55\") " Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.476679 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f5f74b1-9667-421c-94e6-484208c36f55-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7f5f74b1-9667-421c-94e6-484208c36f55" (UID: "7f5f74b1-9667-421c-94e6-484208c36f55"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.482828 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f5f74b1-9667-421c-94e6-484208c36f55-scripts" (OuterVolumeSpecName: "scripts") pod "7f5f74b1-9667-421c-94e6-484208c36f55" (UID: "7f5f74b1-9667-421c-94e6-484208c36f55"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.486076 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f5f74b1-9667-421c-94e6-484208c36f55-kube-api-access-bw45w" (OuterVolumeSpecName: "kube-api-access-bw45w") pod "7f5f74b1-9667-421c-94e6-484208c36f55" (UID: "7f5f74b1-9667-421c-94e6-484208c36f55"). InnerVolumeSpecName "kube-api-access-bw45w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.486442 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f5f74b1-9667-421c-94e6-484208c36f55-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7f5f74b1-9667-421c-94e6-484208c36f55" (UID: "7f5f74b1-9667-421c-94e6-484208c36f55"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.532928 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f5f74b1-9667-421c-94e6-484208c36f55-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7f5f74b1-9667-421c-94e6-484208c36f55" (UID: "7f5f74b1-9667-421c-94e6-484208c36f55"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.533781 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f5f74b1-9667-421c-94e6-484208c36f55-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f5f74b1-9667-421c-94e6-484208c36f55" (UID: "7f5f74b1-9667-421c-94e6-484208c36f55"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.561163 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f5f74b1-9667-421c-94e6-484208c36f55-config-data" (OuterVolumeSpecName: "config-data") pod "7f5f74b1-9667-421c-94e6-484208c36f55" (UID: "7f5f74b1-9667-421c-94e6-484208c36f55"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.578027 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f5f74b1-9667-421c-94e6-484208c36f55-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.578061 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f5f74b1-9667-421c-94e6-484208c36f55-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.578073 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f5f74b1-9667-421c-94e6-484208c36f55-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.578088 4680 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f5f74b1-9667-421c-94e6-484208c36f55-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.578097 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw45w\" (UniqueName: \"kubernetes.io/projected/7f5f74b1-9667-421c-94e6-484208c36f55-kube-api-access-bw45w\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.578107 4680 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f5f74b1-9667-421c-94e6-484208c36f55-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.578117 4680 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f5f74b1-9667-421c-94e6-484208c36f55-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.874702 4680 generic.go:334] "Generic (PLEG): container finished" podID="7f5f74b1-9667-421c-94e6-484208c36f55" containerID="17d510f98f9ce9a63a955c0961a6b17e0c210d04faf62622027c8f851a32bbab" exitCode=0 Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.874758 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f5f74b1-9667-421c-94e6-484208c36f55","Type":"ContainerDied","Data":"17d510f98f9ce9a63a955c0961a6b17e0c210d04faf62622027c8f851a32bbab"} Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.874788 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f5f74b1-9667-421c-94e6-484208c36f55","Type":"ContainerDied","Data":"cb6b02e647128bc7add226d18b92b64306f1cff3860e100908fd44ed5671ed59"} Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.875000 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.875815 4680 scope.go:117] "RemoveContainer" containerID="1950124cbfede488894c7094f1924801f760a455c9bfec4648bbad9c4e82e9bd" Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.944833 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5c85cf9547-qhbrg"] Feb 17 18:03:52 crc kubenswrapper[4680]: I0217 18:03:52.993418 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.005165 4680 scope.go:117] "RemoveContainer" containerID="17d510f98f9ce9a63a955c0961a6b17e0c210d04faf62622027c8f851a32bbab" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.022059 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.034188 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 18:03:53 crc kubenswrapper[4680]: E0217 18:03:53.034702 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f5f74b1-9667-421c-94e6-484208c36f55" containerName="sg-core" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.034721 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f5f74b1-9667-421c-94e6-484208c36f55" containerName="sg-core" Feb 17 18:03:53 crc kubenswrapper[4680]: E0217 18:03:53.034746 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f5f74b1-9667-421c-94e6-484208c36f55" containerName="ceilometer-notification-agent" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.034752 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f5f74b1-9667-421c-94e6-484208c36f55" containerName="ceilometer-notification-agent" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.034917 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f5f74b1-9667-421c-94e6-484208c36f55" containerName="sg-core" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.034941 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f5f74b1-9667-421c-94e6-484208c36f55" containerName="ceilometer-notification-agent" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.036799 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.042852 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.046169 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.046303 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.087951 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/986e1325-3d30-40e7-b433-64a45965a205-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"986e1325-3d30-40e7-b433-64a45965a205\") " pod="openstack/ceilometer-0" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.088211 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/986e1325-3d30-40e7-b433-64a45965a205-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"986e1325-3d30-40e7-b433-64a45965a205\") " pod="openstack/ceilometer-0" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.088399 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/986e1325-3d30-40e7-b433-64a45965a205-config-data\") pod \"ceilometer-0\" (UID: \"986e1325-3d30-40e7-b433-64a45965a205\") " pod="openstack/ceilometer-0" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.088581 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/986e1325-3d30-40e7-b433-64a45965a205-run-httpd\") pod \"ceilometer-0\" (UID: \"986e1325-3d30-40e7-b433-64a45965a205\") " pod="openstack/ceilometer-0" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.088669 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/986e1325-3d30-40e7-b433-64a45965a205-log-httpd\") pod \"ceilometer-0\" (UID: \"986e1325-3d30-40e7-b433-64a45965a205\") " pod="openstack/ceilometer-0" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.088736 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m7mp\" (UniqueName: \"kubernetes.io/projected/986e1325-3d30-40e7-b433-64a45965a205-kube-api-access-9m7mp\") pod \"ceilometer-0\" (UID: \"986e1325-3d30-40e7-b433-64a45965a205\") " pod="openstack/ceilometer-0" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.089641 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/986e1325-3d30-40e7-b433-64a45965a205-scripts\") pod \"ceilometer-0\" (UID: \"986e1325-3d30-40e7-b433-64a45965a205\") " pod="openstack/ceilometer-0" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.105756 4680 scope.go:117] "RemoveContainer" containerID="1950124cbfede488894c7094f1924801f760a455c9bfec4648bbad9c4e82e9bd" Feb 17 18:03:53 crc kubenswrapper[4680]: E0217 18:03:53.111598 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1950124cbfede488894c7094f1924801f760a455c9bfec4648bbad9c4e82e9bd\": container with ID starting with 1950124cbfede488894c7094f1924801f760a455c9bfec4648bbad9c4e82e9bd not found: ID does not exist" containerID="1950124cbfede488894c7094f1924801f760a455c9bfec4648bbad9c4e82e9bd" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.111647 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1950124cbfede488894c7094f1924801f760a455c9bfec4648bbad9c4e82e9bd"} err="failed to get container status \"1950124cbfede488894c7094f1924801f760a455c9bfec4648bbad9c4e82e9bd\": rpc error: code = NotFound desc = could not find container \"1950124cbfede488894c7094f1924801f760a455c9bfec4648bbad9c4e82e9bd\": container with ID starting with 1950124cbfede488894c7094f1924801f760a455c9bfec4648bbad9c4e82e9bd not found: ID does not exist" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.111687 4680 scope.go:117] "RemoveContainer" containerID="17d510f98f9ce9a63a955c0961a6b17e0c210d04faf62622027c8f851a32bbab" Feb 17 18:03:53 crc kubenswrapper[4680]: E0217 18:03:53.117659 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17d510f98f9ce9a63a955c0961a6b17e0c210d04faf62622027c8f851a32bbab\": container with ID starting with 17d510f98f9ce9a63a955c0961a6b17e0c210d04faf62622027c8f851a32bbab not found: ID does not exist" containerID="17d510f98f9ce9a63a955c0961a6b17e0c210d04faf62622027c8f851a32bbab" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.117971 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17d510f98f9ce9a63a955c0961a6b17e0c210d04faf62622027c8f851a32bbab"} err="failed to get container status \"17d510f98f9ce9a63a955c0961a6b17e0c210d04faf62622027c8f851a32bbab\": rpc error: code = NotFound desc = could not find container \"17d510f98f9ce9a63a955c0961a6b17e0c210d04faf62622027c8f851a32bbab\": container with ID starting with 17d510f98f9ce9a63a955c0961a6b17e0c210d04faf62622027c8f851a32bbab not found: ID does not exist" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.120718 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f5f74b1-9667-421c-94e6-484208c36f55" path="/var/lib/kubelet/pods/7f5f74b1-9667-421c-94e6-484208c36f55/volumes" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.195383 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/986e1325-3d30-40e7-b433-64a45965a205-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"986e1325-3d30-40e7-b433-64a45965a205\") " pod="openstack/ceilometer-0" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.195449 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/986e1325-3d30-40e7-b433-64a45965a205-config-data\") pod \"ceilometer-0\" (UID: \"986e1325-3d30-40e7-b433-64a45965a205\") " pod="openstack/ceilometer-0" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.195487 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/986e1325-3d30-40e7-b433-64a45965a205-run-httpd\") pod \"ceilometer-0\" (UID: \"986e1325-3d30-40e7-b433-64a45965a205\") " pod="openstack/ceilometer-0" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.195519 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/986e1325-3d30-40e7-b433-64a45965a205-log-httpd\") pod \"ceilometer-0\" (UID: \"986e1325-3d30-40e7-b433-64a45965a205\") " pod="openstack/ceilometer-0" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.195537 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m7mp\" (UniqueName: \"kubernetes.io/projected/986e1325-3d30-40e7-b433-64a45965a205-kube-api-access-9m7mp\") pod \"ceilometer-0\" (UID: \"986e1325-3d30-40e7-b433-64a45965a205\") " pod="openstack/ceilometer-0" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.195573 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/986e1325-3d30-40e7-b433-64a45965a205-scripts\") pod \"ceilometer-0\" (UID: \"986e1325-3d30-40e7-b433-64a45965a205\") " pod="openstack/ceilometer-0" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.195617 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/986e1325-3d30-40e7-b433-64a45965a205-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"986e1325-3d30-40e7-b433-64a45965a205\") " pod="openstack/ceilometer-0" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.196349 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/986e1325-3d30-40e7-b433-64a45965a205-run-httpd\") pod \"ceilometer-0\" (UID: \"986e1325-3d30-40e7-b433-64a45965a205\") " pod="openstack/ceilometer-0" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.197388 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/986e1325-3d30-40e7-b433-64a45965a205-log-httpd\") pod \"ceilometer-0\" (UID: \"986e1325-3d30-40e7-b433-64a45965a205\") " pod="openstack/ceilometer-0" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.202894 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/986e1325-3d30-40e7-b433-64a45965a205-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"986e1325-3d30-40e7-b433-64a45965a205\") " pod="openstack/ceilometer-0" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.203068 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/986e1325-3d30-40e7-b433-64a45965a205-scripts\") pod \"ceilometer-0\" (UID: \"986e1325-3d30-40e7-b433-64a45965a205\") " pod="openstack/ceilometer-0" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.202932 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/986e1325-3d30-40e7-b433-64a45965a205-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"986e1325-3d30-40e7-b433-64a45965a205\") " pod="openstack/ceilometer-0" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.207457 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/986e1325-3d30-40e7-b433-64a45965a205-config-data\") pod \"ceilometer-0\" (UID: \"986e1325-3d30-40e7-b433-64a45965a205\") " pod="openstack/ceilometer-0" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.217650 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m7mp\" (UniqueName: 
\"kubernetes.io/projected/986e1325-3d30-40e7-b433-64a45965a205-kube-api-access-9m7mp\") pod \"ceilometer-0\" (UID: \"986e1325-3d30-40e7-b433-64a45965a205\") " pod="openstack/ceilometer-0" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.223159 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.838280 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.907545 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"986e1325-3d30-40e7-b433-64a45965a205","Type":"ContainerStarted","Data":"e24cdec87dc1df3b4b070368e0c5c250885c989a9dd775681e6a1cc6d5d18b4d"} Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.937552 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5c85cf9547-qhbrg" event={"ID":"fc8411d6-1f53-4a7f-8528-0b3719bc5a6f","Type":"ContainerStarted","Data":"c892bcc3e916a1863457287a75cdf1a86e525d4bab019e2adbf1cca6cce68ba4"} Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.937597 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5c85cf9547-qhbrg" event={"ID":"fc8411d6-1f53-4a7f-8528-0b3719bc5a6f","Type":"ContainerStarted","Data":"24d8318f8945c1a0e627344d9812d2d19e7d86b7352bf21b2dbb3e72225fb0e2"} Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.937608 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5c85cf9547-qhbrg" event={"ID":"fc8411d6-1f53-4a7f-8528-0b3719bc5a6f","Type":"ContainerStarted","Data":"597f081a1ad60117ef201f972d31cfe6e53e3aa9a133c3c0445af3da9b96de04"} Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.938890 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5c85cf9547-qhbrg" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.938919 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5c85cf9547-qhbrg" Feb 17 18:03:53 crc kubenswrapper[4680]: I0217 18:03:53.983764 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5c85cf9547-qhbrg" podStartSLOduration=2.9837423960000002 podStartE2EDuration="2.983742396s" podCreationTimestamp="2026-02-17 18:03:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:03:53.974686962 +0000 UTC m=+1163.525585661" watchObservedRunningTime="2026-02-17 18:03:53.983742396 +0000 UTC m=+1163.534641075" Feb 17 18:03:54 crc kubenswrapper[4680]: I0217 18:03:54.952469 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"986e1325-3d30-40e7-b433-64a45965a205","Type":"ContainerStarted","Data":"746d627d565816f79c6f47430b25bf1457d8168c311147e7c88d42b9add4bb74"} Feb 17 18:03:55 crc kubenswrapper[4680]: I0217 18:03:55.519784 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 18:03:56 crc kubenswrapper[4680]: I0217 18:03:56.972109 4680 generic.go:334] "Generic (PLEG): container finished" podID="be0e9165-8f19-438f-a422-eb09778329be" containerID="02b9ab7a9af5c8fa6cb72f82609c11288d9fa2dd66d2b2518e3e3be36cfabb87" exitCode=0 Feb 17 18:03:56 crc kubenswrapper[4680]: I0217 18:03:56.972199 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cfbd9cd46-qsg4t" 
event={"ID":"be0e9165-8f19-438f-a422-eb09778329be","Type":"ContainerDied","Data":"02b9ab7a9af5c8fa6cb72f82609c11288d9fa2dd66d2b2518e3e3be36cfabb87"} Feb 17 18:03:57 crc kubenswrapper[4680]: I0217 18:03:57.555840 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 17 18:04:00 crc kubenswrapper[4680]: I0217 18:04:00.005692 4680 generic.go:334] "Generic (PLEG): container finished" podID="91980b55-1251-4404-a22d-1572dd91ec8a" containerID="f3a8f4a5fdca81edfa583020c0921b198bd2adb68d7e11153d1ee645737ca8c6" exitCode=137 Feb 17 18:04:00 crc kubenswrapper[4680]: I0217 18:04:00.005775 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6978d4c7cb-4k7vb" event={"ID":"91980b55-1251-4404-a22d-1572dd91ec8a","Type":"ContainerDied","Data":"f3a8f4a5fdca81edfa583020c0921b198bd2adb68d7e11153d1ee645737ca8c6"} Feb 17 18:04:00 crc kubenswrapper[4680]: I0217 18:04:00.010153 4680 generic.go:334] "Generic (PLEG): container finished" podID="d7b7455b-130d-4711-9174-16e352b16455" containerID="8b01e5dcc6cc0527ee746e19a1edb564cad531d2e71631a44d11071138f3ae62" exitCode=137 Feb 17 18:04:00 crc kubenswrapper[4680]: I0217 18:04:00.010188 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8ccfcccc8-2qwb2" event={"ID":"d7b7455b-130d-4711-9174-16e352b16455","Type":"ContainerDied","Data":"8b01e5dcc6cc0527ee746e19a1edb564cad531d2e71631a44d11071138f3ae62"} Feb 17 18:04:00 crc kubenswrapper[4680]: E0217 18:04:00.677301 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f5f74b1_9667_421c_94e6_484208c36f55.slice/crio-conmon-17d510f98f9ce9a63a955c0961a6b17e0c210d04faf62622027c8f851a32bbab.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode60fee03_af47_4ec5_b656_06b344e31568.slice/crio-3a3f7defff16075c89f4efbb4a4fb2976304e553a307c5b2b5e06a44242ffe5b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f41da1e_3f88_41c7_ab31_f3efdd7790c9.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7b7455b_130d_4711_9174_16e352b16455.slice/crio-8b01e5dcc6cc0527ee746e19a1edb564cad531d2e71631a44d11071138f3ae62.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f11fb00_7f13_4757_9c5c_6261d32198a5.slice/crio-72342d6befb03a9dc7f69140e4cf7e4a730116e54dcfe0e4cd765094e1c24088\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91980b55_1251_4404_a22d_1572dd91ec8a.slice/crio-f3a8f4a5fdca81edfa583020c0921b198bd2adb68d7e11153d1ee645737ca8c6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe0e9165_8f19_438f_a422_eb09778329be.slice/crio-98cfbb44b49ce52953de041f8e9706e8267bf17c6b93a04705aa6e2f5b08c269.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f11fb00_7f13_4757_9c5c_6261d32198a5.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f41da1e_3f88_41c7_ab31_f3efdd7790c9.slice/crio-b3c4ed2d7901ef4768b42fa658b3cd4497b1a922b0ab45260a8f66ec7f59c3fa\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f5f74b1_9667_421c_94e6_484208c36f55.slice/crio-cb6b02e647128bc7add226d18b92b64306f1cff3860e100908fd44ed5671ed59\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7b7455b_130d_4711_9174_16e352b16455.slice/crio-conmon-8b01e5dcc6cc0527ee746e19a1edb564cad531d2e71631a44d11071138f3ae62.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe0e9165_8f19_438f_a422_eb09778329be.slice/crio-conmon-98cfbb44b49ce52953de041f8e9706e8267bf17c6b93a04705aa6e2f5b08c269.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe0e9165_8f19_438f_a422_eb09778329be.slice/crio-conmon-02b9ab7a9af5c8fa6cb72f82609c11288d9fa2dd66d2b2518e3e3be36cfabb87.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f5f74b1_9667_421c_94e6_484208c36f55.slice/crio-17d510f98f9ce9a63a955c0961a6b17e0c210d04faf62622027c8f851a32bbab.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode60fee03_af47_4ec5_b656_06b344e31568.slice/crio-conmon-3a3f7defff16075c89f4efbb4a4fb2976304e553a307c5b2b5e06a44242ffe5b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe0e9165_8f19_438f_a422_eb09778329be.slice/crio-02b9ab7a9af5c8fa6cb72f82609c11288d9fa2dd66d2b2518e3e3be36cfabb87.scope\": RecentStats: unable to find data in memory cache]" Feb 17 18:04:01 crc kubenswrapper[4680]: I0217 18:04:01.259977 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="e60fee03-af47-4ec5-b656-06b344e31568" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.165:8776/healthcheck\": dial tcp 10.217.0.165:8776: connect: connection refused" Feb 17 18:04:01 crc kubenswrapper[4680]: I0217 18:04:01.270374 4680 generic.go:334] "Generic (PLEG): container finished" podID="e60fee03-af47-4ec5-b656-06b344e31568" containerID="3a3f7defff16075c89f4efbb4a4fb2976304e553a307c5b2b5e06a44242ffe5b" exitCode=137 Feb 17 18:04:01 crc kubenswrapper[4680]: I0217 18:04:01.270421 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e60fee03-af47-4ec5-b656-06b344e31568","Type":"ContainerDied","Data":"3a3f7defff16075c89f4efbb4a4fb2976304e553a307c5b2b5e06a44242ffe5b"} Feb 17 18:04:02 crc kubenswrapper[4680]: I0217 18:04:02.408833 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5c85cf9547-qhbrg" Feb 17 18:04:02 crc kubenswrapper[4680]: I0217 18:04:02.429973 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5c85cf9547-qhbrg" Feb 17 18:04:05 crc kubenswrapper[4680]: E0217 18:04:05.219545 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Feb 17 18:04:05 crc kubenswrapper[4680]: E0217 18:04:05.219987 
4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f4h597hb6h579h597h677hdch97h5d7h59chf7h555h6bh89h68fh5chcch644h646h54fh56dhf5h5ddh5b6h95h9fh5dbhc6h5bfh598h8dh54bq,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgtgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(a0986f8d-da98-4f6e-bd4e-457e4f405022): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 18:04:05 crc kubenswrapper[4680]: E0217 18:04:05.221184 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="a0986f8d-da98-4f6e-bd4e-457e4f405022" Feb 17 18:04:05 crc kubenswrapper[4680]: E0217 18:04:05.357757 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="a0986f8d-da98-4f6e-bd4e-457e4f405022" Feb 17 18:04:05 crc kubenswrapper[4680]: I0217 18:04:05.824458 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7cfbd9cd46-qsg4t" Feb 17 18:04:05 crc kubenswrapper[4680]: I0217 18:04:05.841434 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/be0e9165-8f19-438f-a422-eb09778329be-config\") pod \"be0e9165-8f19-438f-a422-eb09778329be\" (UID: \"be0e9165-8f19-438f-a422-eb09778329be\") " Feb 17 18:04:05 crc kubenswrapper[4680]: I0217 18:04:05.841514 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/be0e9165-8f19-438f-a422-eb09778329be-httpd-config\") pod \"be0e9165-8f19-438f-a422-eb09778329be\" (UID: \"be0e9165-8f19-438f-a422-eb09778329be\") " Feb 17 18:04:05 crc kubenswrapper[4680]: I0217 18:04:05.841630 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/be0e9165-8f19-438f-a422-eb09778329be-ovndb-tls-certs\") pod \"be0e9165-8f19-438f-a422-eb09778329be\" (UID: \"be0e9165-8f19-438f-a422-eb09778329be\") " Feb 17 18:04:05 crc kubenswrapper[4680]: I0217 18:04:05.841699 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r97k\" (UniqueName: \"kubernetes.io/projected/be0e9165-8f19-438f-a422-eb09778329be-kube-api-access-9r97k\") pod \"be0e9165-8f19-438f-a422-eb09778329be\" (UID: \"be0e9165-8f19-438f-a422-eb09778329be\") " Feb 17 18:04:05 crc kubenswrapper[4680]: I0217 18:04:05.841716 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be0e9165-8f19-438f-a422-eb09778329be-combined-ca-bundle\") pod \"be0e9165-8f19-438f-a422-eb09778329be\" (UID: \"be0e9165-8f19-438f-a422-eb09778329be\") " Feb 17 18:04:05 crc kubenswrapper[4680]: I0217 18:04:05.881546 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be0e9165-8f19-438f-a422-eb09778329be-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "be0e9165-8f19-438f-a422-eb09778329be" (UID: "be0e9165-8f19-438f-a422-eb09778329be"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:04:05 crc kubenswrapper[4680]: I0217 18:04:05.904822 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be0e9165-8f19-438f-a422-eb09778329be-kube-api-access-9r97k" (OuterVolumeSpecName: "kube-api-access-9r97k") pod "be0e9165-8f19-438f-a422-eb09778329be" (UID: "be0e9165-8f19-438f-a422-eb09778329be"). InnerVolumeSpecName "kube-api-access-9r97k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:04:05 crc kubenswrapper[4680]: I0217 18:04:05.946649 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r97k\" (UniqueName: \"kubernetes.io/projected/be0e9165-8f19-438f-a422-eb09778329be-kube-api-access-9r97k\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:05 crc kubenswrapper[4680]: I0217 18:04:05.946685 4680 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/be0e9165-8f19-438f-a422-eb09778329be-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.035585 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.070048 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be0e9165-8f19-438f-a422-eb09778329be-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "be0e9165-8f19-438f-a422-eb09778329be" (UID: "be0e9165-8f19-438f-a422-eb09778329be"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.072068 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be0e9165-8f19-438f-a422-eb09778329be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be0e9165-8f19-438f-a422-eb09778329be" (UID: "be0e9165-8f19-438f-a422-eb09778329be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.081459 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be0e9165-8f19-438f-a422-eb09778329be-config" (OuterVolumeSpecName: "config") pod "be0e9165-8f19-438f-a422-eb09778329be" (UID: "be0e9165-8f19-438f-a422-eb09778329be"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.151828 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e60fee03-af47-4ec5-b656-06b344e31568-etc-machine-id\") pod \"e60fee03-af47-4ec5-b656-06b344e31568\" (UID: \"e60fee03-af47-4ec5-b656-06b344e31568\") " Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.151914 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e60fee03-af47-4ec5-b656-06b344e31568-config-data-custom\") pod \"e60fee03-af47-4ec5-b656-06b344e31568\" (UID: \"e60fee03-af47-4ec5-b656-06b344e31568\") " Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.151940 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e60fee03-af47-4ec5-b656-06b344e31568-config-data\") pod \"e60fee03-af47-4ec5-b656-06b344e31568\" (UID: \"e60fee03-af47-4ec5-b656-06b344e31568\") " Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.152048 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e60fee03-af47-4ec5-b656-06b344e31568-logs\") pod \"e60fee03-af47-4ec5-b656-06b344e31568\" (UID: \"e60fee03-af47-4ec5-b656-06b344e31568\") " Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.152148 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e60fee03-af47-4ec5-b656-06b344e31568-combined-ca-bundle\") pod \"e60fee03-af47-4ec5-b656-06b344e31568\" (UID: \"e60fee03-af47-4ec5-b656-06b344e31568\") " Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.152240 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqxct\" (UniqueName: \"kubernetes.io/projected/e60fee03-af47-4ec5-b656-06b344e31568-kube-api-access-cqxct\") pod \"e60fee03-af47-4ec5-b656-06b344e31568\" (UID: \"e60fee03-af47-4ec5-b656-06b344e31568\") " Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.152275 4680 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e60fee03-af47-4ec5-b656-06b344e31568-scripts\") pod \"e60fee03-af47-4ec5-b656-06b344e31568\" (UID: \"e60fee03-af47-4ec5-b656-06b344e31568\") " Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.152448 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e60fee03-af47-4ec5-b656-06b344e31568-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e60fee03-af47-4ec5-b656-06b344e31568" (UID: "e60fee03-af47-4ec5-b656-06b344e31568"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.153177 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e60fee03-af47-4ec5-b656-06b344e31568-logs" (OuterVolumeSpecName: "logs") pod "e60fee03-af47-4ec5-b656-06b344e31568" (UID: "e60fee03-af47-4ec5-b656-06b344e31568"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.155419 4680 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e60fee03-af47-4ec5-b656-06b344e31568-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.155478 4680 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/be0e9165-8f19-438f-a422-eb09778329be-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.155530 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be0e9165-8f19-438f-a422-eb09778329be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.155897 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/be0e9165-8f19-438f-a422-eb09778329be-config\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.166164 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e60fee03-af47-4ec5-b656-06b344e31568-kube-api-access-cqxct" (OuterVolumeSpecName: "kube-api-access-cqxct") pod "e60fee03-af47-4ec5-b656-06b344e31568" (UID: "e60fee03-af47-4ec5-b656-06b344e31568"). InnerVolumeSpecName "kube-api-access-cqxct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.166424 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e60fee03-af47-4ec5-b656-06b344e31568-scripts" (OuterVolumeSpecName: "scripts") pod "e60fee03-af47-4ec5-b656-06b344e31568" (UID: "e60fee03-af47-4ec5-b656-06b344e31568"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.193710 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e60fee03-af47-4ec5-b656-06b344e31568-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e60fee03-af47-4ec5-b656-06b344e31568" (UID: "e60fee03-af47-4ec5-b656-06b344e31568"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.215594 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e60fee03-af47-4ec5-b656-06b344e31568-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e60fee03-af47-4ec5-b656-06b344e31568" (UID: "e60fee03-af47-4ec5-b656-06b344e31568"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.256457 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e60fee03-af47-4ec5-b656-06b344e31568-config-data" (OuterVolumeSpecName: "config-data") pod "e60fee03-af47-4ec5-b656-06b344e31568" (UID: "e60fee03-af47-4ec5-b656-06b344e31568"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.258626 4680 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e60fee03-af47-4ec5-b656-06b344e31568-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.259283 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e60fee03-af47-4ec5-b656-06b344e31568-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.259297 4680 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e60fee03-af47-4ec5-b656-06b344e31568-logs\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.259308 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e60fee03-af47-4ec5-b656-06b344e31568-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.259334 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqxct\" (UniqueName: \"kubernetes.io/projected/e60fee03-af47-4ec5-b656-06b344e31568-kube-api-access-cqxct\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.259346 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e60fee03-af47-4ec5-b656-06b344e31568-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.373661 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8ccfcccc8-2qwb2" event={"ID":"d7b7455b-130d-4711-9174-16e352b16455","Type":"ContainerStarted","Data":"bb43b7d8073115a72c22a9eb5e451ca9ee4e1652249db208b2313af6b24dab59"} Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.384114 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cfbd9cd46-qsg4t" event={"ID":"be0e9165-8f19-438f-a422-eb09778329be","Type":"ContainerDied","Data":"c42ff3fac53ae68e782ee04dfbeb465ecfdfebd1e63bdf3ab97997bb74b50f20"} Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.384207 4680 scope.go:117] "RemoveContainer" containerID="98cfbb44b49ce52953de041f8e9706e8267bf17c6b93a04705aa6e2f5b08c269" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.384131 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7cfbd9cd46-qsg4t" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.405389 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.405461 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e60fee03-af47-4ec5-b656-06b344e31568","Type":"ContainerDied","Data":"fef3cc5b737d3085bdb9cfb41d0177fe49939a3183b1949ff91bd664f9996e69"} Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.412288 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6978d4c7cb-4k7vb" event={"ID":"91980b55-1251-4404-a22d-1572dd91ec8a","Type":"ContainerStarted","Data":"be39c4202a24420b08980e7038a74edf42f9d8621ddfe20cd668edac6acbb7d2"} Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.417581 4680 scope.go:117] "RemoveContainer" containerID="02b9ab7a9af5c8fa6cb72f82609c11288d9fa2dd66d2b2518e3e3be36cfabb87" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.422542 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"986e1325-3d30-40e7-b433-64a45965a205","Type":"ContainerStarted","Data":"440b835f6397c01b13911816cffa50944424acfdfe6b6b63d3d4601d56245662"} Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.468896 4680 scope.go:117] "RemoveContainer" containerID="3a3f7defff16075c89f4efbb4a4fb2976304e553a307c5b2b5e06a44242ffe5b" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.506377 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.529118 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.559841 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7cfbd9cd46-qsg4t"] Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.568461 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7cfbd9cd46-qsg4t"] Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.581089 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 17 18:04:06 crc kubenswrapper[4680]: E0217 18:04:06.581747 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be0e9165-8f19-438f-a422-eb09778329be" containerName="neutron-api" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.581870 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="be0e9165-8f19-438f-a422-eb09778329be" containerName="neutron-api" Feb 17 18:04:06 crc kubenswrapper[4680]: E0217 18:04:06.581960 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be0e9165-8f19-438f-a422-eb09778329be" containerName="neutron-httpd" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.582031 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="be0e9165-8f19-438f-a422-eb09778329be" containerName="neutron-httpd" Feb 17 18:04:06 crc kubenswrapper[4680]: E0217 18:04:06.582117 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e60fee03-af47-4ec5-b656-06b344e31568" containerName="cinder-api-log" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.582189 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e60fee03-af47-4ec5-b656-06b344e31568" containerName="cinder-api-log" Feb 17 18:04:06 crc kubenswrapper[4680]: E0217 18:04:06.582283 4680 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="e60fee03-af47-4ec5-b656-06b344e31568" containerName="cinder-api" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.582419 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e60fee03-af47-4ec5-b656-06b344e31568" containerName="cinder-api" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.582703 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="e60fee03-af47-4ec5-b656-06b344e31568" containerName="cinder-api" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.582785 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="be0e9165-8f19-438f-a422-eb09778329be" containerName="neutron-httpd" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.582852 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="e60fee03-af47-4ec5-b656-06b344e31568" containerName="cinder-api-log" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.582937 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="be0e9165-8f19-438f-a422-eb09778329be" containerName="neutron-api" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.584212 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.588994 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.602966 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.603371 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.603519 4680 scope.go:117] "RemoveContainer" containerID="b8b3f76aa8ab3fe2957a31ebc341d75d60a5b6dbf611bf579ae796a779c010f0" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.603949 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.770408 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/878b21ac-092c-4e88-ba08-f59d0b220d79-config-data\") pod \"cinder-api-0\" (UID: \"878b21ac-092c-4e88-ba08-f59d0b220d79\") " pod="openstack/cinder-api-0" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.770460 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/878b21ac-092c-4e88-ba08-f59d0b220d79-scripts\") pod \"cinder-api-0\" (UID: \"878b21ac-092c-4e88-ba08-f59d0b220d79\") " pod="openstack/cinder-api-0" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.770599 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6lg5\" (UniqueName: \"kubernetes.io/projected/878b21ac-092c-4e88-ba08-f59d0b220d79-kube-api-access-m6lg5\") pod \"cinder-api-0\" (UID: \"878b21ac-092c-4e88-ba08-f59d0b220d79\") " pod="openstack/cinder-api-0" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.770694 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/878b21ac-092c-4e88-ba08-f59d0b220d79-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"878b21ac-092c-4e88-ba08-f59d0b220d79\") " 
pod="openstack/cinder-api-0" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.770789 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/878b21ac-092c-4e88-ba08-f59d0b220d79-public-tls-certs\") pod \"cinder-api-0\" (UID: \"878b21ac-092c-4e88-ba08-f59d0b220d79\") " pod="openstack/cinder-api-0" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.770841 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/878b21ac-092c-4e88-ba08-f59d0b220d79-etc-machine-id\") pod \"cinder-api-0\" (UID: \"878b21ac-092c-4e88-ba08-f59d0b220d79\") " pod="openstack/cinder-api-0" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.770929 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/878b21ac-092c-4e88-ba08-f59d0b220d79-config-data-custom\") pod \"cinder-api-0\" (UID: \"878b21ac-092c-4e88-ba08-f59d0b220d79\") " pod="openstack/cinder-api-0" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.770996 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/878b21ac-092c-4e88-ba08-f59d0b220d79-logs\") pod \"cinder-api-0\" (UID: \"878b21ac-092c-4e88-ba08-f59d0b220d79\") " pod="openstack/cinder-api-0" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.771123 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/878b21ac-092c-4e88-ba08-f59d0b220d79-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"878b21ac-092c-4e88-ba08-f59d0b220d79\") " pod="openstack/cinder-api-0" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.872367 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/878b21ac-092c-4e88-ba08-f59d0b220d79-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"878b21ac-092c-4e88-ba08-f59d0b220d79\") " pod="openstack/cinder-api-0" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.872919 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/878b21ac-092c-4e88-ba08-f59d0b220d79-config-data\") pod \"cinder-api-0\" (UID: \"878b21ac-092c-4e88-ba08-f59d0b220d79\") " pod="openstack/cinder-api-0" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.872995 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/878b21ac-092c-4e88-ba08-f59d0b220d79-scripts\") pod \"cinder-api-0\" (UID: \"878b21ac-092c-4e88-ba08-f59d0b220d79\") " pod="openstack/cinder-api-0" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.873083 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6lg5\" (UniqueName: \"kubernetes.io/projected/878b21ac-092c-4e88-ba08-f59d0b220d79-kube-api-access-m6lg5\") pod \"cinder-api-0\" (UID: \"878b21ac-092c-4e88-ba08-f59d0b220d79\") " pod="openstack/cinder-api-0" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.873182 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/878b21ac-092c-4e88-ba08-f59d0b220d79-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"878b21ac-092c-4e88-ba08-f59d0b220d79\") " pod="openstack/cinder-api-0" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.873296 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/878b21ac-092c-4e88-ba08-f59d0b220d79-public-tls-certs\") pod \"cinder-api-0\" (UID: \"878b21ac-092c-4e88-ba08-f59d0b220d79\") " pod="openstack/cinder-api-0" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.873401 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/878b21ac-092c-4e88-ba08-f59d0b220d79-etc-machine-id\") pod \"cinder-api-0\" (UID: \"878b21ac-092c-4e88-ba08-f59d0b220d79\") " pod="openstack/cinder-api-0" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.873517 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/878b21ac-092c-4e88-ba08-f59d0b220d79-config-data-custom\") pod \"cinder-api-0\" (UID: \"878b21ac-092c-4e88-ba08-f59d0b220d79\") " pod="openstack/cinder-api-0" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.873610 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/878b21ac-092c-4e88-ba08-f59d0b220d79-logs\") pod \"cinder-api-0\" (UID: \"878b21ac-092c-4e88-ba08-f59d0b220d79\") " pod="openstack/cinder-api-0" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.874104 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/878b21ac-092c-4e88-ba08-f59d0b220d79-logs\") pod \"cinder-api-0\" (UID: \"878b21ac-092c-4e88-ba08-f59d0b220d79\") " pod="openstack/cinder-api-0" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.875573 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/878b21ac-092c-4e88-ba08-f59d0b220d79-etc-machine-id\") pod \"cinder-api-0\" (UID: \"878b21ac-092c-4e88-ba08-f59d0b220d79\") " pod="openstack/cinder-api-0" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.881242 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/878b21ac-092c-4e88-ba08-f59d0b220d79-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"878b21ac-092c-4e88-ba08-f59d0b220d79\") " pod="openstack/cinder-api-0" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.882004 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/878b21ac-092c-4e88-ba08-f59d0b220d79-config-data\") pod \"cinder-api-0\" (UID: \"878b21ac-092c-4e88-ba08-f59d0b220d79\") " pod="openstack/cinder-api-0" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.882753 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/878b21ac-092c-4e88-ba08-f59d0b220d79-public-tls-certs\") pod \"cinder-api-0\" (UID: \"878b21ac-092c-4e88-ba08-f59d0b220d79\") " pod="openstack/cinder-api-0" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.883482 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/878b21ac-092c-4e88-ba08-f59d0b220d79-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"878b21ac-092c-4e88-ba08-f59d0b220d79\") " pod="openstack/cinder-api-0" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.892707 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/878b21ac-092c-4e88-ba08-f59d0b220d79-scripts\") pod \"cinder-api-0\" (UID: \"878b21ac-092c-4e88-ba08-f59d0b220d79\") " pod="openstack/cinder-api-0" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.894842 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/878b21ac-092c-4e88-ba08-f59d0b220d79-config-data-custom\") pod \"cinder-api-0\" (UID: \"878b21ac-092c-4e88-ba08-f59d0b220d79\") " pod="openstack/cinder-api-0" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.909210 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6lg5\" (UniqueName: \"kubernetes.io/projected/878b21ac-092c-4e88-ba08-f59d0b220d79-kube-api-access-m6lg5\") pod \"cinder-api-0\" (UID: \"878b21ac-092c-4e88-ba08-f59d0b220d79\") " pod="openstack/cinder-api-0" Feb 17 18:04:06 crc kubenswrapper[4680]: I0217 18:04:06.950805 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 17 18:04:07 crc kubenswrapper[4680]: I0217 18:04:07.143949 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be0e9165-8f19-438f-a422-eb09778329be" path="/var/lib/kubelet/pods/be0e9165-8f19-438f-a422-eb09778329be/volumes" Feb 17 18:04:07 crc kubenswrapper[4680]: I0217 18:04:07.144658 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e60fee03-af47-4ec5-b656-06b344e31568" path="/var/lib/kubelet/pods/e60fee03-af47-4ec5-b656-06b344e31568/volumes" Feb 17 18:04:07 crc kubenswrapper[4680]: I0217 18:04:07.436618 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"986e1325-3d30-40e7-b433-64a45965a205","Type":"ContainerStarted","Data":"9cc85ee24eaa25c771b33f03c6bfd5018a9905ecc29506c64a535d4defccb51e"} Feb 17 18:04:07 crc kubenswrapper[4680]: I0217 18:04:07.520337 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 18:04:08 crc kubenswrapper[4680]: I0217 18:04:08.473953 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"986e1325-3d30-40e7-b433-64a45965a205","Type":"ContainerStarted","Data":"36c2f67b03f2cfdc0eaf0a3010b07b9ed3f91d665fdccf83cca1c1217cdc7b0a"} Feb 17 18:04:08 crc kubenswrapper[4680]: I0217 18:04:08.474469 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="986e1325-3d30-40e7-b433-64a45965a205" containerName="ceilometer-central-agent" containerID="cri-o://746d627d565816f79c6f47430b25bf1457d8168c311147e7c88d42b9add4bb74" gracePeriod=30 Feb 17 18:04:08 crc kubenswrapper[4680]: I0217 18:04:08.474546 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 18:04:08 crc kubenswrapper[4680]: I0217 18:04:08.474558 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="986e1325-3d30-40e7-b433-64a45965a205" containerName="proxy-httpd" containerID="cri-o://36c2f67b03f2cfdc0eaf0a3010b07b9ed3f91d665fdccf83cca1c1217cdc7b0a" gracePeriod=30 Feb 17 18:04:08 crc kubenswrapper[4680]: I0217 
18:04:08.474609 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="986e1325-3d30-40e7-b433-64a45965a205" containerName="sg-core" containerID="cri-o://9cc85ee24eaa25c771b33f03c6bfd5018a9905ecc29506c64a535d4defccb51e" gracePeriod=30 Feb 17 18:04:08 crc kubenswrapper[4680]: I0217 18:04:08.474647 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="986e1325-3d30-40e7-b433-64a45965a205" containerName="ceilometer-notification-agent" containerID="cri-o://440b835f6397c01b13911816cffa50944424acfdfe6b6b63d3d4601d56245662" gracePeriod=30 Feb 17 18:04:08 crc kubenswrapper[4680]: I0217 18:04:08.484110 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"878b21ac-092c-4e88-ba08-f59d0b220d79","Type":"ContainerStarted","Data":"8ca5873e09d434f29a753ddaa9e26da39864a07532e9ed26ad4fbacdf5eec4d6"} Feb 17 18:04:08 crc kubenswrapper[4680]: I0217 18:04:08.484178 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"878b21ac-092c-4e88-ba08-f59d0b220d79","Type":"ContainerStarted","Data":"9242e810ea2e00b9f108ae2e36e677eb695cd36a71ca2962e62beba654c70e29"} Feb 17 18:04:08 crc kubenswrapper[4680]: I0217 18:04:08.505102 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.263958952 podStartE2EDuration="16.505086749s" podCreationTimestamp="2026-02-17 18:03:52 +0000 UTC" firstStartedPulling="2026-02-17 18:03:53.823522228 +0000 UTC m=+1163.374420907" lastFinishedPulling="2026-02-17 18:04:08.064650025 +0000 UTC m=+1177.615548704" observedRunningTime="2026-02-17 18:04:08.50225023 +0000 UTC m=+1178.053148909" watchObservedRunningTime="2026-02-17 18:04:08.505086749 +0000 UTC m=+1178.055985428" Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.201410 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.201740 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.415777 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6978d4c7cb-4k7vb" Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.416992 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6978d4c7cb-4k7vb" Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.507581 4680 generic.go:334] "Generic (PLEG): container finished" podID="986e1325-3d30-40e7-b433-64a45965a205" containerID="9cc85ee24eaa25c771b33f03c6bfd5018a9905ecc29506c64a535d4defccb51e" exitCode=2 Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.507612 4680 generic.go:334] "Generic (PLEG): container finished" podID="986e1325-3d30-40e7-b433-64a45965a205" containerID="440b835f6397c01b13911816cffa50944424acfdfe6b6b63d3d4601d56245662" exitCode=0 Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.507620 4680 generic.go:334] "Generic (PLEG): container finished" podID="986e1325-3d30-40e7-b433-64a45965a205" containerID="746d627d565816f79c6f47430b25bf1457d8168c311147e7c88d42b9add4bb74" exitCode=0 Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.507655 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"986e1325-3d30-40e7-b433-64a45965a205","Type":"ContainerDied","Data":"9cc85ee24eaa25c771b33f03c6bfd5018a9905ecc29506c64a535d4defccb51e"} Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.507708 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"986e1325-3d30-40e7-b433-64a45965a205","Type":"ContainerDied","Data":"440b835f6397c01b13911816cffa50944424acfdfe6b6b63d3d4601d56245662"} Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.507724 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"986e1325-3d30-40e7-b433-64a45965a205","Type":"ContainerDied","Data":"746d627d565816f79c6f47430b25bf1457d8168c311147e7c88d42b9add4bb74"} Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.515216 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"878b21ac-092c-4e88-ba08-f59d0b220d79","Type":"ContainerStarted","Data":"cd5cad05995d7270432fb8e8a812b00fd9960f00ff5a7591748fd237865e5bba"} Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.515270 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.546063 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.54604502 podStartE2EDuration="3.54604502s" podCreationTimestamp="2026-02-17 18:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:04:09.539876276 +0000 UTC m=+1179.090774955" watchObservedRunningTime="2026-02-17 18:04:09.54604502 +0000 UTC m=+1179.096943699" Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.753348 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-7ghqn"] Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.754489 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7ghqn" Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.771719 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-7ghqn"] Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.854353 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2ae5-account-create-update-m7zrl"] Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.856074 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2ae5-account-create-update-m7zrl" Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.858838 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.866160 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-rkvht"] Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.868140 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-rkvht" Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.869905 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lb77\" (UniqueName: \"kubernetes.io/projected/c035dac9-ced2-4d11-98fb-55b14332a2ac-kube-api-access-2lb77\") pod \"nova-api-db-create-7ghqn\" (UID: \"c035dac9-ced2-4d11-98fb-55b14332a2ac\") " pod="openstack/nova-api-db-create-7ghqn" Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.870111 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c035dac9-ced2-4d11-98fb-55b14332a2ac-operator-scripts\") pod \"nova-api-db-create-7ghqn\" (UID: \"c035dac9-ced2-4d11-98fb-55b14332a2ac\") " pod="openstack/nova-api-db-create-7ghqn" Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.875730 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2ae5-account-create-update-m7zrl"] Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.885784 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-rkvht"] Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.955086 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-hdz75"] Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.956360 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-hdz75" Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.971874 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lb77\" (UniqueName: \"kubernetes.io/projected/c035dac9-ced2-4d11-98fb-55b14332a2ac-kube-api-access-2lb77\") pod \"nova-api-db-create-7ghqn\" (UID: \"c035dac9-ced2-4d11-98fb-55b14332a2ac\") " pod="openstack/nova-api-db-create-7ghqn" Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.972105 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x68f8\" (UniqueName: \"kubernetes.io/projected/47c0ebcb-0014-4cc0-b660-536f4392d4d6-kube-api-access-x68f8\") pod \"nova-api-2ae5-account-create-update-m7zrl\" (UID: \"47c0ebcb-0014-4cc0-b660-536f4392d4d6\") " pod="openstack/nova-api-2ae5-account-create-update-m7zrl" Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.972197 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47c0ebcb-0014-4cc0-b660-536f4392d4d6-operator-scripts\") pod \"nova-api-2ae5-account-create-update-m7zrl\" (UID: \"47c0ebcb-0014-4cc0-b660-536f4392d4d6\") " pod="openstack/nova-api-2ae5-account-create-update-m7zrl" Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.972284 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c035dac9-ced2-4d11-98fb-55b14332a2ac-operator-scripts\") pod \"nova-api-db-create-7ghqn\" (UID: \"c035dac9-ced2-4d11-98fb-55b14332a2ac\") " pod="openstack/nova-api-db-create-7ghqn" Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.972406 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc7qr\" (UniqueName: \"kubernetes.io/projected/1a32553e-b714-453e-9686-a5d72cd3e3fd-kube-api-access-mc7qr\") pod 
\"nova-cell0-db-create-rkvht\" (UID: \"1a32553e-b714-453e-9686-a5d72cd3e3fd\") " pod="openstack/nova-cell0-db-create-rkvht" Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.972532 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a32553e-b714-453e-9686-a5d72cd3e3fd-operator-scripts\") pod \"nova-cell0-db-create-rkvht\" (UID: \"1a32553e-b714-453e-9686-a5d72cd3e3fd\") " pod="openstack/nova-cell0-db-create-rkvht" Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.973688 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c035dac9-ced2-4d11-98fb-55b14332a2ac-operator-scripts\") pod \"nova-api-db-create-7ghqn\" (UID: \"c035dac9-ced2-4d11-98fb-55b14332a2ac\") " pod="openstack/nova-api-db-create-7ghqn" Feb 17 18:04:09 crc kubenswrapper[4680]: I0217 18:04:09.981986 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-hdz75"] Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.004259 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lb77\" (UniqueName: \"kubernetes.io/projected/c035dac9-ced2-4d11-98fb-55b14332a2ac-kube-api-access-2lb77\") pod \"nova-api-db-create-7ghqn\" (UID: \"c035dac9-ced2-4d11-98fb-55b14332a2ac\") " pod="openstack/nova-api-db-create-7ghqn" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.065368 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-d88b-account-create-update-t8n2f"] Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.067016 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-d88b-account-create-update-t8n2f" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.068880 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.070121 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-7ghqn" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.074001 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x68f8\" (UniqueName: \"kubernetes.io/projected/47c0ebcb-0014-4cc0-b660-536f4392d4d6-kube-api-access-x68f8\") pod \"nova-api-2ae5-account-create-update-m7zrl\" (UID: \"47c0ebcb-0014-4cc0-b660-536f4392d4d6\") " pod="openstack/nova-api-2ae5-account-create-update-m7zrl" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.074046 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47c0ebcb-0014-4cc0-b660-536f4392d4d6-operator-scripts\") pod \"nova-api-2ae5-account-create-update-m7zrl\" (UID: \"47c0ebcb-0014-4cc0-b660-536f4392d4d6\") " pod="openstack/nova-api-2ae5-account-create-update-m7zrl" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.074091 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9534f780-22b4-4288-b8d5-3d80336a0e55-operator-scripts\") pod \"nova-cell1-db-create-hdz75\" (UID: \"9534f780-22b4-4288-b8d5-3d80336a0e55\") " pod="openstack/nova-cell1-db-create-hdz75" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.074115 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc7qr\" (UniqueName: \"kubernetes.io/projected/1a32553e-b714-453e-9686-a5d72cd3e3fd-kube-api-access-mc7qr\") pod \"nova-cell0-db-create-rkvht\" (UID: \"1a32553e-b714-453e-9686-a5d72cd3e3fd\") " pod="openstack/nova-cell0-db-create-rkvht" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.074144 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a32553e-b714-453e-9686-a5d72cd3e3fd-operator-scripts\") pod \"nova-cell0-db-create-rkvht\" (UID: \"1a32553e-b714-453e-9686-a5d72cd3e3fd\") " pod="openstack/nova-cell0-db-create-rkvht" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.074167 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj8p8\" (UniqueName: \"kubernetes.io/projected/9534f780-22b4-4288-b8d5-3d80336a0e55-kube-api-access-qj8p8\") pod \"nova-cell1-db-create-hdz75\" (UID: \"9534f780-22b4-4288-b8d5-3d80336a0e55\") " pod="openstack/nova-cell1-db-create-hdz75" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.075386 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47c0ebcb-0014-4cc0-b660-536f4392d4d6-operator-scripts\") pod \"nova-api-2ae5-account-create-update-m7zrl\" (UID: \"47c0ebcb-0014-4cc0-b660-536f4392d4d6\") " pod="openstack/nova-api-2ae5-account-create-update-m7zrl" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.075948 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a32553e-b714-453e-9686-a5d72cd3e3fd-operator-scripts\") pod \"nova-cell0-db-create-rkvht\" (UID: \"1a32553e-b714-453e-9686-a5d72cd3e3fd\") " pod="openstack/nova-cell0-db-create-rkvht" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.086521 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-d88b-account-create-update-t8n2f"] Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 
18:04:10.101970 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc7qr\" (UniqueName: \"kubernetes.io/projected/1a32553e-b714-453e-9686-a5d72cd3e3fd-kube-api-access-mc7qr\") pod \"nova-cell0-db-create-rkvht\" (UID: \"1a32553e-b714-453e-9686-a5d72cd3e3fd\") " pod="openstack/nova-cell0-db-create-rkvht" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.118222 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x68f8\" (UniqueName: \"kubernetes.io/projected/47c0ebcb-0014-4cc0-b660-536f4392d4d6-kube-api-access-x68f8\") pod \"nova-api-2ae5-account-create-update-m7zrl\" (UID: \"47c0ebcb-0014-4cc0-b660-536f4392d4d6\") " pod="openstack/nova-api-2ae5-account-create-update-m7zrl" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.174389 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2ae5-account-create-update-m7zrl" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.176499 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9534f780-22b4-4288-b8d5-3d80336a0e55-operator-scripts\") pod \"nova-cell1-db-create-hdz75\" (UID: \"9534f780-22b4-4288-b8d5-3d80336a0e55\") " pod="openstack/nova-cell1-db-create-hdz75" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.177469 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj8p8\" (UniqueName: \"kubernetes.io/projected/9534f780-22b4-4288-b8d5-3d80336a0e55-kube-api-access-qj8p8\") pod \"nova-cell1-db-create-hdz75\" (UID: \"9534f780-22b4-4288-b8d5-3d80336a0e55\") " pod="openstack/nova-cell1-db-create-hdz75" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.177929 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd9bf62-1912-4923-a80f-5f4c5ba9d592-operator-scripts\") pod \"nova-cell0-d88b-account-create-update-t8n2f\" (UID: \"6fd9bf62-1912-4923-a80f-5f4c5ba9d592\") " pod="openstack/nova-cell0-d88b-account-create-update-t8n2f" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.178071 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kvg4\" (UniqueName: \"kubernetes.io/projected/6fd9bf62-1912-4923-a80f-5f4c5ba9d592-kube-api-access-4kvg4\") pod \"nova-cell0-d88b-account-create-update-t8n2f\" (UID: \"6fd9bf62-1912-4923-a80f-5f4c5ba9d592\") " pod="openstack/nova-cell0-d88b-account-create-update-t8n2f" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.177395 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9534f780-22b4-4288-b8d5-3d80336a0e55-operator-scripts\") pod \"nova-cell1-db-create-hdz75\" (UID: \"9534f780-22b4-4288-b8d5-3d80336a0e55\") " pod="openstack/nova-cell1-db-create-hdz75" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.215659 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-rkvht" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.237726 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj8p8\" (UniqueName: \"kubernetes.io/projected/9534f780-22b4-4288-b8d5-3d80336a0e55-kube-api-access-qj8p8\") pod \"nova-cell1-db-create-hdz75\" (UID: \"9534f780-22b4-4288-b8d5-3d80336a0e55\") " pod="openstack/nova-cell1-db-create-hdz75" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.274300 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-hdz75" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.285139 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd9bf62-1912-4923-a80f-5f4c5ba9d592-operator-scripts\") pod \"nova-cell0-d88b-account-create-update-t8n2f\" (UID: \"6fd9bf62-1912-4923-a80f-5f4c5ba9d592\") " pod="openstack/nova-cell0-d88b-account-create-update-t8n2f" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.285246 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kvg4\" (UniqueName: \"kubernetes.io/projected/6fd9bf62-1912-4923-a80f-5f4c5ba9d592-kube-api-access-4kvg4\") pod \"nova-cell0-d88b-account-create-update-t8n2f\" (UID: \"6fd9bf62-1912-4923-a80f-5f4c5ba9d592\") " pod="openstack/nova-cell0-d88b-account-create-update-t8n2f" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.285975 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd9bf62-1912-4923-a80f-5f4c5ba9d592-operator-scripts\") pod \"nova-cell0-d88b-account-create-update-t8n2f\" (UID: \"6fd9bf62-1912-4923-a80f-5f4c5ba9d592\") " pod="openstack/nova-cell0-d88b-account-create-update-t8n2f" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.303451 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-8c2a-account-create-update-xfl7v"] Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.306159 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-8c2a-account-create-update-xfl7v" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.316468 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.320358 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kvg4\" (UniqueName: \"kubernetes.io/projected/6fd9bf62-1912-4923-a80f-5f4c5ba9d592-kube-api-access-4kvg4\") pod \"nova-cell0-d88b-account-create-update-t8n2f\" (UID: \"6fd9bf62-1912-4923-a80f-5f4c5ba9d592\") " pod="openstack/nova-cell0-d88b-account-create-update-t8n2f" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.389582 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c975a37-6c0c-4126-9764-7e434c09cde1-operator-scripts\") pod \"nova-cell1-8c2a-account-create-update-xfl7v\" (UID: \"8c975a37-6c0c-4126-9764-7e434c09cde1\") " pod="openstack/nova-cell1-8c2a-account-create-update-xfl7v" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.389849 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rgtq\" (UniqueName: \"kubernetes.io/projected/8c975a37-6c0c-4126-9764-7e434c09cde1-kube-api-access-7rgtq\") pod \"nova-cell1-8c2a-account-create-update-xfl7v\" (UID: \"8c975a37-6c0c-4126-9764-7e434c09cde1\") " pod="openstack/nova-cell1-8c2a-account-create-update-xfl7v" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.393725 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-d88b-account-create-update-t8n2f" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.485377 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8c2a-account-create-update-xfl7v"] Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.540823 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c975a37-6c0c-4126-9764-7e434c09cde1-operator-scripts\") pod \"nova-cell1-8c2a-account-create-update-xfl7v\" (UID: \"8c975a37-6c0c-4126-9764-7e434c09cde1\") " pod="openstack/nova-cell1-8c2a-account-create-update-xfl7v" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.540972 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rgtq\" (UniqueName: \"kubernetes.io/projected/8c975a37-6c0c-4126-9764-7e434c09cde1-kube-api-access-7rgtq\") pod \"nova-cell1-8c2a-account-create-update-xfl7v\" (UID: \"8c975a37-6c0c-4126-9764-7e434c09cde1\") " pod="openstack/nova-cell1-8c2a-account-create-update-xfl7v" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.543923 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c975a37-6c0c-4126-9764-7e434c09cde1-operator-scripts\") pod \"nova-cell1-8c2a-account-create-update-xfl7v\" (UID: \"8c975a37-6c0c-4126-9764-7e434c09cde1\") " pod="openstack/nova-cell1-8c2a-account-create-update-xfl7v" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.571800 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rgtq\" (UniqueName: \"kubernetes.io/projected/8c975a37-6c0c-4126-9764-7e434c09cde1-kube-api-access-7rgtq\") pod \"nova-cell1-8c2a-account-create-update-xfl7v\" (UID: 
\"8c975a37-6c0c-4126-9764-7e434c09cde1\") " pod="openstack/nova-cell1-8c2a-account-create-update-xfl7v" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.657914 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-7ghqn"] Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.776759 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8c2a-account-create-update-xfl7v" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.900273 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="e60fee03-af47-4ec5-b656-06b344e31568" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.165:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 18:04:10 crc kubenswrapper[4680]: I0217 18:04:10.973026 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2ae5-account-create-update-m7zrl"] Feb 17 18:04:11 crc kubenswrapper[4680]: I0217 18:04:11.067914 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-rkvht"] Feb 17 18:04:11 crc kubenswrapper[4680]: I0217 18:04:11.085008 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-hdz75"] Feb 17 18:04:11 crc kubenswrapper[4680]: W0217 18:04:11.114135 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a32553e_b714_453e_9686_a5d72cd3e3fd.slice/crio-8b48101fcd658fdedacf8600084f13793e92282a0e7ef36474e721adc57c02c6 WatchSource:0}: Error finding container 8b48101fcd658fdedacf8600084f13793e92282a0e7ef36474e721adc57c02c6: Status 404 returned error can't find the container with id 8b48101fcd658fdedacf8600084f13793e92282a0e7ef36474e721adc57c02c6 Feb 17 18:04:11 crc kubenswrapper[4680]: I0217 18:04:11.227496 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-d88b-account-create-update-t8n2f"] Feb 17 18:04:11 crc kubenswrapper[4680]: I0217 18:04:11.391665 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8c2a-account-create-update-xfl7v"] Feb 17 18:04:11 crc kubenswrapper[4680]: W0217 18:04:11.410103 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c975a37_6c0c_4126_9764_7e434c09cde1.slice/crio-b183d43e4145dbcda9fee21e0580e74e4bc77da3ce91447796252c83b4ac1595 WatchSource:0}: Error finding container b183d43e4145dbcda9fee21e0580e74e4bc77da3ce91447796252c83b4ac1595: Status 404 returned error can't find the container with id b183d43e4145dbcda9fee21e0580e74e4bc77da3ce91447796252c83b4ac1595 Feb 17 18:04:11 crc kubenswrapper[4680]: I0217 18:04:11.601544 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2ae5-account-create-update-m7zrl" event={"ID":"47c0ebcb-0014-4cc0-b660-536f4392d4d6","Type":"ContainerStarted","Data":"944c720d40e5ed54055d7d1e89f0a31214dba259a34b6ac5b0490d36fad5415b"} Feb 17 18:04:11 crc kubenswrapper[4680]: I0217 18:04:11.601589 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2ae5-account-create-update-m7zrl" event={"ID":"47c0ebcb-0014-4cc0-b660-536f4392d4d6","Type":"ContainerStarted","Data":"2c24094e760961c9e77749c35612e259a6284537dfe5a9c0748ebeb9ac11ea72"} Feb 17 18:04:11 crc kubenswrapper[4680]: I0217 18:04:11.610070 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-db-create-rkvht" event={"ID":"1a32553e-b714-453e-9686-a5d72cd3e3fd","Type":"ContainerStarted","Data":"8b48101fcd658fdedacf8600084f13793e92282a0e7ef36474e721adc57c02c6"} Feb 17 18:04:11 crc kubenswrapper[4680]: I0217 18:04:11.611950 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d88b-account-create-update-t8n2f" event={"ID":"6fd9bf62-1912-4923-a80f-5f4c5ba9d592","Type":"ContainerStarted","Data":"dc6c05045bc024f89f587d6c34447a59e630eb05e8a9af6b02ff72dd55eb16d3"} Feb 17 18:04:11 crc kubenswrapper[4680]: I0217 18:04:11.622147 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8c2a-account-create-update-xfl7v" event={"ID":"8c975a37-6c0c-4126-9764-7e434c09cde1","Type":"ContainerStarted","Data":"b183d43e4145dbcda9fee21e0580e74e4bc77da3ce91447796252c83b4ac1595"} Feb 17 18:04:11 crc kubenswrapper[4680]: I0217 18:04:11.624003 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-hdz75" event={"ID":"9534f780-22b4-4288-b8d5-3d80336a0e55","Type":"ContainerStarted","Data":"cc733f3e54fd482cb4f0c3a36fbb57b3a6c778e02120888d9fbb84ed5f3f5065"} Feb 17 18:04:11 crc kubenswrapper[4680]: I0217 18:04:11.632613 4680 generic.go:334] "Generic (PLEG): container finished" podID="c035dac9-ced2-4d11-98fb-55b14332a2ac" containerID="accd89a62d0663067098553fd55a8f66d68d5c6f0d94a73c5732fe462541e9a2" exitCode=0 Feb 17 18:04:11 crc kubenswrapper[4680]: I0217 18:04:11.632685 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7ghqn" event={"ID":"c035dac9-ced2-4d11-98fb-55b14332a2ac","Type":"ContainerDied","Data":"accd89a62d0663067098553fd55a8f66d68d5c6f0d94a73c5732fe462541e9a2"} Feb 17 18:04:11 crc kubenswrapper[4680]: I0217 18:04:11.632844 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7ghqn" event={"ID":"c035dac9-ced2-4d11-98fb-55b14332a2ac","Type":"ContainerStarted","Data":"b10f629dcee51612a4955e2f7030b1672d6644325a2a8f0ce0e5d44b346c58dd"} Feb 17 18:04:11 crc kubenswrapper[4680]: I0217 18:04:11.651583 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-2ae5-account-create-update-m7zrl" podStartSLOduration=2.651567064 podStartE2EDuration="2.651567064s" podCreationTimestamp="2026-02-17 18:04:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:04:11.619002832 +0000 UTC m=+1181.169901511" watchObservedRunningTime="2026-02-17 18:04:11.651567064 +0000 UTC m=+1181.202465743" Feb 17 18:04:12 crc kubenswrapper[4680]: I0217 18:04:12.642405 4680 generic.go:334] "Generic (PLEG): container finished" podID="6fd9bf62-1912-4923-a80f-5f4c5ba9d592" containerID="c3aacbbfcb27ed9154b954cf371ceba9ed735c1b3573e9716751f20193f5bcd8" exitCode=0 Feb 17 18:04:12 crc kubenswrapper[4680]: I0217 18:04:12.642463 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d88b-account-create-update-t8n2f" event={"ID":"6fd9bf62-1912-4923-a80f-5f4c5ba9d592","Type":"ContainerDied","Data":"c3aacbbfcb27ed9154b954cf371ceba9ed735c1b3573e9716751f20193f5bcd8"} Feb 17 18:04:12 crc kubenswrapper[4680]: I0217 18:04:12.644918 4680 generic.go:334] "Generic (PLEG): container finished" podID="8c975a37-6c0c-4126-9764-7e434c09cde1" containerID="3d980b326af1db60c2d541bf447cf19cd73ebe345d411697060d326546adb18c" exitCode=0 Feb 17 18:04:12 crc kubenswrapper[4680]: I0217 18:04:12.644975 4680 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8c2a-account-create-update-xfl7v" event={"ID":"8c975a37-6c0c-4126-9764-7e434c09cde1","Type":"ContainerDied","Data":"3d980b326af1db60c2d541bf447cf19cd73ebe345d411697060d326546adb18c"} Feb 17 18:04:12 crc kubenswrapper[4680]: I0217 18:04:12.647770 4680 generic.go:334] "Generic (PLEG): container finished" podID="9534f780-22b4-4288-b8d5-3d80336a0e55" containerID="983a25ec9a3dbdfe8e5f42b33b3ffcb104560d7399ebd08b638d0acadfd471e5" exitCode=0 Feb 17 18:04:12 crc kubenswrapper[4680]: I0217 18:04:12.647818 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-hdz75" event={"ID":"9534f780-22b4-4288-b8d5-3d80336a0e55","Type":"ContainerDied","Data":"983a25ec9a3dbdfe8e5f42b33b3ffcb104560d7399ebd08b638d0acadfd471e5"} Feb 17 18:04:12 crc kubenswrapper[4680]: I0217 18:04:12.649619 4680 generic.go:334] "Generic (PLEG): container finished" podID="47c0ebcb-0014-4cc0-b660-536f4392d4d6" containerID="944c720d40e5ed54055d7d1e89f0a31214dba259a34b6ac5b0490d36fad5415b" exitCode=0 Feb 17 18:04:12 crc kubenswrapper[4680]: I0217 18:04:12.649679 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2ae5-account-create-update-m7zrl" event={"ID":"47c0ebcb-0014-4cc0-b660-536f4392d4d6","Type":"ContainerDied","Data":"944c720d40e5ed54055d7d1e89f0a31214dba259a34b6ac5b0490d36fad5415b"} Feb 17 18:04:12 crc kubenswrapper[4680]: I0217 18:04:12.655017 4680 generic.go:334] "Generic (PLEG): container finished" podID="1a32553e-b714-453e-9686-a5d72cd3e3fd" containerID="b3cafea384aaff83bdde53a1768a7773e54b25f1d033188476f9e0a75ce5ef65" exitCode=0 Feb 17 18:04:12 crc kubenswrapper[4680]: I0217 18:04:12.655143 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rkvht" event={"ID":"1a32553e-b714-453e-9686-a5d72cd3e3fd","Type":"ContainerDied","Data":"b3cafea384aaff83bdde53a1768a7773e54b25f1d033188476f9e0a75ce5ef65"} Feb 17 18:04:12 crc kubenswrapper[4680]: I0217 18:04:12.718405 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 18:04:12 crc kubenswrapper[4680]: I0217 18:04:12.718714 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7a9d1eec-3c1b-405d-91d8-d76e5233e223" containerName="glance-log" containerID="cri-o://4b0d6ce47086ad55b4ef743d0232ba8f0f05cf9455ca4bb43c008f14ce05c4cc" gracePeriod=30 Feb 17 18:04:12 crc kubenswrapper[4680]: I0217 18:04:12.718894 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7a9d1eec-3c1b-405d-91d8-d76e5233e223" containerName="glance-httpd" containerID="cri-o://ed5ffd776e9ece6e132f39e2a1dcb7f02dc73a7b1dececc030f9aaa6aa149e4f" gracePeriod=30 Feb 17 18:04:13 crc kubenswrapper[4680]: I0217 18:04:13.128573 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-7ghqn" Feb 17 18:04:13 crc kubenswrapper[4680]: I0217 18:04:13.240787 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lb77\" (UniqueName: \"kubernetes.io/projected/c035dac9-ced2-4d11-98fb-55b14332a2ac-kube-api-access-2lb77\") pod \"c035dac9-ced2-4d11-98fb-55b14332a2ac\" (UID: \"c035dac9-ced2-4d11-98fb-55b14332a2ac\") " Feb 17 18:04:13 crc kubenswrapper[4680]: I0217 18:04:13.241048 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c035dac9-ced2-4d11-98fb-55b14332a2ac-operator-scripts\") pod \"c035dac9-ced2-4d11-98fb-55b14332a2ac\" (UID: \"c035dac9-ced2-4d11-98fb-55b14332a2ac\") " Feb 17 18:04:13 crc kubenswrapper[4680]: I0217 18:04:13.241797 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c035dac9-ced2-4d11-98fb-55b14332a2ac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c035dac9-ced2-4d11-98fb-55b14332a2ac" (UID: "c035dac9-ced2-4d11-98fb-55b14332a2ac"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:04:13 crc kubenswrapper[4680]: I0217 18:04:13.253603 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c035dac9-ced2-4d11-98fb-55b14332a2ac-kube-api-access-2lb77" (OuterVolumeSpecName: "kube-api-access-2lb77") pod "c035dac9-ced2-4d11-98fb-55b14332a2ac" (UID: "c035dac9-ced2-4d11-98fb-55b14332a2ac"). InnerVolumeSpecName "kube-api-access-2lb77". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:04:13 crc kubenswrapper[4680]: I0217 18:04:13.343571 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lb77\" (UniqueName: \"kubernetes.io/projected/c035dac9-ced2-4d11-98fb-55b14332a2ac-kube-api-access-2lb77\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:13 crc kubenswrapper[4680]: I0217 18:04:13.343606 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c035dac9-ced2-4d11-98fb-55b14332a2ac-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:13 crc kubenswrapper[4680]: I0217 18:04:13.664636 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7ghqn" event={"ID":"c035dac9-ced2-4d11-98fb-55b14332a2ac","Type":"ContainerDied","Data":"b10f629dcee51612a4955e2f7030b1672d6644325a2a8f0ce0e5d44b346c58dd"} Feb 17 18:04:13 crc kubenswrapper[4680]: I0217 18:04:13.665007 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b10f629dcee51612a4955e2f7030b1672d6644325a2a8f0ce0e5d44b346c58dd" Feb 17 18:04:13 crc kubenswrapper[4680]: I0217 18:04:13.664655 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-7ghqn" Feb 17 18:04:13 crc kubenswrapper[4680]: I0217 18:04:13.667939 4680 generic.go:334] "Generic (PLEG): container finished" podID="7a9d1eec-3c1b-405d-91d8-d76e5233e223" containerID="4b0d6ce47086ad55b4ef743d0232ba8f0f05cf9455ca4bb43c008f14ce05c4cc" exitCode=143 Feb 17 18:04:13 crc kubenswrapper[4680]: I0217 18:04:13.668215 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7a9d1eec-3c1b-405d-91d8-d76e5233e223","Type":"ContainerDied","Data":"4b0d6ce47086ad55b4ef743d0232ba8f0f05cf9455ca4bb43c008f14ce05c4cc"} Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.396750 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rkvht" Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.462465 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mc7qr\" (UniqueName: \"kubernetes.io/projected/1a32553e-b714-453e-9686-a5d72cd3e3fd-kube-api-access-mc7qr\") pod \"1a32553e-b714-453e-9686-a5d72cd3e3fd\" (UID: \"1a32553e-b714-453e-9686-a5d72cd3e3fd\") " Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.462739 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a32553e-b714-453e-9686-a5d72cd3e3fd-operator-scripts\") pod \"1a32553e-b714-453e-9686-a5d72cd3e3fd\" (UID: \"1a32553e-b714-453e-9686-a5d72cd3e3fd\") " Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.463842 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a32553e-b714-453e-9686-a5d72cd3e3fd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1a32553e-b714-453e-9686-a5d72cd3e3fd" (UID: "1a32553e-b714-453e-9686-a5d72cd3e3fd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.504045 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a32553e-b714-453e-9686-a5d72cd3e3fd-kube-api-access-mc7qr" (OuterVolumeSpecName: "kube-api-access-mc7qr") pod "1a32553e-b714-453e-9686-a5d72cd3e3fd" (UID: "1a32553e-b714-453e-9686-a5d72cd3e3fd"). InnerVolumeSpecName "kube-api-access-mc7qr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.597458 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mc7qr\" (UniqueName: \"kubernetes.io/projected/1a32553e-b714-453e-9686-a5d72cd3e3fd-kube-api-access-mc7qr\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.597500 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a32553e-b714-453e-9686-a5d72cd3e3fd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.678873 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2ae5-account-create-update-m7zrl" Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.709770 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2ae5-account-create-update-m7zrl" Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.709785 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2ae5-account-create-update-m7zrl" event={"ID":"47c0ebcb-0014-4cc0-b660-536f4392d4d6","Type":"ContainerDied","Data":"2c24094e760961c9e77749c35612e259a6284537dfe5a9c0748ebeb9ac11ea72"} Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.709840 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c24094e760961c9e77749c35612e259a6284537dfe5a9c0748ebeb9ac11ea72" Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.715654 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-d88b-account-create-update-t8n2f" Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.717132 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rkvht" event={"ID":"1a32553e-b714-453e-9686-a5d72cd3e3fd","Type":"ContainerDied","Data":"8b48101fcd658fdedacf8600084f13793e92282a0e7ef36474e721adc57c02c6"} Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.717179 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b48101fcd658fdedacf8600084f13793e92282a0e7ef36474e721adc57c02c6" Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.717218 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rkvht" Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.726575 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-hdz75" Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.749831 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8c2a-account-create-update-xfl7v" Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.801301 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47c0ebcb-0014-4cc0-b660-536f4392d4d6-operator-scripts\") pod \"47c0ebcb-0014-4cc0-b660-536f4392d4d6\" (UID: \"47c0ebcb-0014-4cc0-b660-536f4392d4d6\") " Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.801387 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x68f8\" (UniqueName: \"kubernetes.io/projected/47c0ebcb-0014-4cc0-b660-536f4392d4d6-kube-api-access-x68f8\") pod \"47c0ebcb-0014-4cc0-b660-536f4392d4d6\" (UID: \"47c0ebcb-0014-4cc0-b660-536f4392d4d6\") " Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.801872 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47c0ebcb-0014-4cc0-b660-536f4392d4d6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "47c0ebcb-0014-4cc0-b660-536f4392d4d6" (UID: "47c0ebcb-0014-4cc0-b660-536f4392d4d6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.805513 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47c0ebcb-0014-4cc0-b660-536f4392d4d6-kube-api-access-x68f8" (OuterVolumeSpecName: "kube-api-access-x68f8") pod "47c0ebcb-0014-4cc0-b660-536f4392d4d6" (UID: "47c0ebcb-0014-4cc0-b660-536f4392d4d6"). InnerVolumeSpecName "kube-api-access-x68f8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.903152 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kvg4\" (UniqueName: \"kubernetes.io/projected/6fd9bf62-1912-4923-a80f-5f4c5ba9d592-kube-api-access-4kvg4\") pod \"6fd9bf62-1912-4923-a80f-5f4c5ba9d592\" (UID: \"6fd9bf62-1912-4923-a80f-5f4c5ba9d592\") " Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.903330 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9534f780-22b4-4288-b8d5-3d80336a0e55-operator-scripts\") pod \"9534f780-22b4-4288-b8d5-3d80336a0e55\" (UID: \"9534f780-22b4-4288-b8d5-3d80336a0e55\") " Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.903365 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rgtq\" (UniqueName: \"kubernetes.io/projected/8c975a37-6c0c-4126-9764-7e434c09cde1-kube-api-access-7rgtq\") pod \"8c975a37-6c0c-4126-9764-7e434c09cde1\" (UID: \"8c975a37-6c0c-4126-9764-7e434c09cde1\") " Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.903426 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c975a37-6c0c-4126-9764-7e434c09cde1-operator-scripts\") pod \"8c975a37-6c0c-4126-9764-7e434c09cde1\" (UID: \"8c975a37-6c0c-4126-9764-7e434c09cde1\") " Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.903502 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj8p8\" (UniqueName: \"kubernetes.io/projected/9534f780-22b4-4288-b8d5-3d80336a0e55-kube-api-access-qj8p8\") pod \"9534f780-22b4-4288-b8d5-3d80336a0e55\" (UID: \"9534f780-22b4-4288-b8d5-3d80336a0e55\") " Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.903545 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd9bf62-1912-4923-a80f-5f4c5ba9d592-operator-scripts\") pod \"6fd9bf62-1912-4923-a80f-5f4c5ba9d592\" (UID: \"6fd9bf62-1912-4923-a80f-5f4c5ba9d592\") " Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.903812 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9534f780-22b4-4288-b8d5-3d80336a0e55-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9534f780-22b4-4288-b8d5-3d80336a0e55" (UID: "9534f780-22b4-4288-b8d5-3d80336a0e55"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.904108 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47c0ebcb-0014-4cc0-b660-536f4392d4d6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.904134 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x68f8\" (UniqueName: \"kubernetes.io/projected/47c0ebcb-0014-4cc0-b660-536f4392d4d6-kube-api-access-x68f8\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.904147 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9534f780-22b4-4288-b8d5-3d80336a0e55-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.904360 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c975a37-6c0c-4126-9764-7e434c09cde1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c975a37-6c0c-4126-9764-7e434c09cde1" (UID: "8c975a37-6c0c-4126-9764-7e434c09cde1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.904622 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd9bf62-1912-4923-a80f-5f4c5ba9d592-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6fd9bf62-1912-4923-a80f-5f4c5ba9d592" (UID: "6fd9bf62-1912-4923-a80f-5f4c5ba9d592"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.910632 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fd9bf62-1912-4923-a80f-5f4c5ba9d592-kube-api-access-4kvg4" (OuterVolumeSpecName: "kube-api-access-4kvg4") pod "6fd9bf62-1912-4923-a80f-5f4c5ba9d592" (UID: "6fd9bf62-1912-4923-a80f-5f4c5ba9d592"). InnerVolumeSpecName "kube-api-access-4kvg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.912614 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c975a37-6c0c-4126-9764-7e434c09cde1-kube-api-access-7rgtq" (OuterVolumeSpecName: "kube-api-access-7rgtq") pod "8c975a37-6c0c-4126-9764-7e434c09cde1" (UID: "8c975a37-6c0c-4126-9764-7e434c09cde1"). InnerVolumeSpecName "kube-api-access-7rgtq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:04:14 crc kubenswrapper[4680]: I0217 18:04:14.912691 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9534f780-22b4-4288-b8d5-3d80336a0e55-kube-api-access-qj8p8" (OuterVolumeSpecName: "kube-api-access-qj8p8") pod "9534f780-22b4-4288-b8d5-3d80336a0e55" (UID: "9534f780-22b4-4288-b8d5-3d80336a0e55"). InnerVolumeSpecName "kube-api-access-qj8p8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:04:15 crc kubenswrapper[4680]: I0217 18:04:15.005719 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qj8p8\" (UniqueName: \"kubernetes.io/projected/9534f780-22b4-4288-b8d5-3d80336a0e55-kube-api-access-qj8p8\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:15 crc kubenswrapper[4680]: I0217 18:04:15.005755 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd9bf62-1912-4923-a80f-5f4c5ba9d592-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:15 crc kubenswrapper[4680]: I0217 18:04:15.005766 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kvg4\" (UniqueName: \"kubernetes.io/projected/6fd9bf62-1912-4923-a80f-5f4c5ba9d592-kube-api-access-4kvg4\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:15 crc kubenswrapper[4680]: I0217 18:04:15.005777 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rgtq\" (UniqueName: \"kubernetes.io/projected/8c975a37-6c0c-4126-9764-7e434c09cde1-kube-api-access-7rgtq\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:15 crc kubenswrapper[4680]: I0217 18:04:15.005787 4680 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c975a37-6c0c-4126-9764-7e434c09cde1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:15 crc kubenswrapper[4680]: I0217 18:04:15.043212 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 18:04:15 crc kubenswrapper[4680]: I0217 18:04:15.043526 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="95585ae8-c7e0-415d-b239-3c078e4d0a7e" containerName="glance-log" containerID="cri-o://782e545fb1f32b809f690585710f6b36072eddbf5a37bb9b577aae0fe02d7cdb" gracePeriod=30 Feb 17 18:04:15 crc kubenswrapper[4680]: I0217 18:04:15.043806 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="95585ae8-c7e0-415d-b239-3c078e4d0a7e" containerName="glance-httpd" containerID="cri-o://67d8c6a37e4e05cc774793d5c9349a72df97eb750ef23b13f24875fcbfbf089a" gracePeriod=30 Feb 17 18:04:15 crc kubenswrapper[4680]: I0217 18:04:15.728610 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d88b-account-create-update-t8n2f" event={"ID":"6fd9bf62-1912-4923-a80f-5f4c5ba9d592","Type":"ContainerDied","Data":"dc6c05045bc024f89f587d6c34447a59e630eb05e8a9af6b02ff72dd55eb16d3"} Feb 17 18:04:15 crc kubenswrapper[4680]: I0217 18:04:15.729567 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc6c05045bc024f89f587d6c34447a59e630eb05e8a9af6b02ff72dd55eb16d3" Feb 17 18:04:15 crc kubenswrapper[4680]: I0217 18:04:15.728660 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-d88b-account-create-update-t8n2f" Feb 17 18:04:15 crc kubenswrapper[4680]: I0217 18:04:15.734782 4680 generic.go:334] "Generic (PLEG): container finished" podID="95585ae8-c7e0-415d-b239-3c078e4d0a7e" containerID="782e545fb1f32b809f690585710f6b36072eddbf5a37bb9b577aae0fe02d7cdb" exitCode=143 Feb 17 18:04:15 crc kubenswrapper[4680]: I0217 18:04:15.734860 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"95585ae8-c7e0-415d-b239-3c078e4d0a7e","Type":"ContainerDied","Data":"782e545fb1f32b809f690585710f6b36072eddbf5a37bb9b577aae0fe02d7cdb"} Feb 17 18:04:15 crc kubenswrapper[4680]: I0217 18:04:15.737292 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8c2a-account-create-update-xfl7v" event={"ID":"8c975a37-6c0c-4126-9764-7e434c09cde1","Type":"ContainerDied","Data":"b183d43e4145dbcda9fee21e0580e74e4bc77da3ce91447796252c83b4ac1595"} Feb 17 18:04:15 crc kubenswrapper[4680]: I0217 18:04:15.737363 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b183d43e4145dbcda9fee21e0580e74e4bc77da3ce91447796252c83b4ac1595" Feb 17 18:04:15 crc kubenswrapper[4680]: I0217 18:04:15.737455 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8c2a-account-create-update-xfl7v" Feb 17 18:04:15 crc kubenswrapper[4680]: I0217 18:04:15.740337 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-hdz75" Feb 17 18:04:15 crc kubenswrapper[4680]: I0217 18:04:15.741076 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-hdz75" event={"ID":"9534f780-22b4-4288-b8d5-3d80336a0e55","Type":"ContainerDied","Data":"cc733f3e54fd482cb4f0c3a36fbb57b3a6c778e02120888d9fbb84ed5f3f5065"} Feb 17 18:04:15 crc kubenswrapper[4680]: I0217 18:04:15.741103 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc733f3e54fd482cb4f0c3a36fbb57b3a6c778e02120888d9fbb84ed5f3f5065" Feb 17 18:04:15 crc kubenswrapper[4680]: I0217 18:04:15.921159 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="7a9d1eec-3c1b-405d-91d8-d76e5233e223" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.152:9292/healthcheck\": read tcp 10.217.0.2:42594->10.217.0.152:9292: read: connection reset by peer" Feb 17 18:04:15 crc kubenswrapper[4680]: I0217 18:04:15.921589 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="7a9d1eec-3c1b-405d-91d8-d76e5233e223" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.152:9292/healthcheck\": read tcp 10.217.0.2:42586->10.217.0.152:9292: read: connection reset by peer" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.473983 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.636062 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") " Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.636128 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a9d1eec-3c1b-405d-91d8-d76e5233e223-combined-ca-bundle\") pod \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") " Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.636263 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a9d1eec-3c1b-405d-91d8-d76e5233e223-httpd-run\") pod \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") " Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.636294 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a9d1eec-3c1b-405d-91d8-d76e5233e223-scripts\") pod \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") " Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.636341 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a9d1eec-3c1b-405d-91d8-d76e5233e223-config-data\") pod \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") " Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.636376 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a9d1eec-3c1b-405d-91d8-d76e5233e223-public-tls-certs\") pod \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") " Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.636464 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ddbc\" (UniqueName: \"kubernetes.io/projected/7a9d1eec-3c1b-405d-91d8-d76e5233e223-kube-api-access-8ddbc\") pod \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") " Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.636486 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a9d1eec-3c1b-405d-91d8-d76e5233e223-logs\") pod \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\" (UID: \"7a9d1eec-3c1b-405d-91d8-d76e5233e223\") " Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.636645 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a9d1eec-3c1b-405d-91d8-d76e5233e223-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7a9d1eec-3c1b-405d-91d8-d76e5233e223" (UID: "7a9d1eec-3c1b-405d-91d8-d76e5233e223"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.636953 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a9d1eec-3c1b-405d-91d8-d76e5233e223-logs" (OuterVolumeSpecName: "logs") pod "7a9d1eec-3c1b-405d-91d8-d76e5233e223" (UID: "7a9d1eec-3c1b-405d-91d8-d76e5233e223"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.637066 4680 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a9d1eec-3c1b-405d-91d8-d76e5233e223-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.642646 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "7a9d1eec-3c1b-405d-91d8-d76e5233e223" (UID: "7a9d1eec-3c1b-405d-91d8-d76e5233e223"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.645835 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a9d1eec-3c1b-405d-91d8-d76e5233e223-kube-api-access-8ddbc" (OuterVolumeSpecName: "kube-api-access-8ddbc") pod "7a9d1eec-3c1b-405d-91d8-d76e5233e223" (UID: "7a9d1eec-3c1b-405d-91d8-d76e5233e223"). InnerVolumeSpecName "kube-api-access-8ddbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.672773 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9d1eec-3c1b-405d-91d8-d76e5233e223-scripts" (OuterVolumeSpecName: "scripts") pod "7a9d1eec-3c1b-405d-91d8-d76e5233e223" (UID: "7a9d1eec-3c1b-405d-91d8-d76e5233e223"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.701796 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9d1eec-3c1b-405d-91d8-d76e5233e223-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a9d1eec-3c1b-405d-91d8-d76e5233e223" (UID: "7a9d1eec-3c1b-405d-91d8-d76e5233e223"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.733976 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9d1eec-3c1b-405d-91d8-d76e5233e223-config-data" (OuterVolumeSpecName: "config-data") pod "7a9d1eec-3c1b-405d-91d8-d76e5233e223" (UID: "7a9d1eec-3c1b-405d-91d8-d76e5233e223"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.743956 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a9d1eec-3c1b-405d-91d8-d76e5233e223-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.744001 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a9d1eec-3c1b-405d-91d8-d76e5233e223-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.744019 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ddbc\" (UniqueName: \"kubernetes.io/projected/7a9d1eec-3c1b-405d-91d8-d76e5233e223-kube-api-access-8ddbc\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.744034 4680 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a9d1eec-3c1b-405d-91d8-d76e5233e223-logs\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.744066 4680 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.744084 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a9d1eec-3c1b-405d-91d8-d76e5233e223-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.764062 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9d1eec-3c1b-405d-91d8-d76e5233e223-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7a9d1eec-3c1b-405d-91d8-d76e5233e223" (UID: "7a9d1eec-3c1b-405d-91d8-d76e5233e223"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.770828 4680 generic.go:334] "Generic (PLEG): container finished" podID="7a9d1eec-3c1b-405d-91d8-d76e5233e223" containerID="ed5ffd776e9ece6e132f39e2a1dcb7f02dc73a7b1dececc030f9aaa6aa149e4f" exitCode=0 Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.770886 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7a9d1eec-3c1b-405d-91d8-d76e5233e223","Type":"ContainerDied","Data":"ed5ffd776e9ece6e132f39e2a1dcb7f02dc73a7b1dececc030f9aaa6aa149e4f"} Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.770913 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7a9d1eec-3c1b-405d-91d8-d76e5233e223","Type":"ContainerDied","Data":"7843243fe2f15eda2497ca1bb69b688ec71083f2dc81e79aeddc842c4db08b02"} Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.770945 4680 scope.go:117] "RemoveContainer" containerID="ed5ffd776e9ece6e132f39e2a1dcb7f02dc73a7b1dececc030f9aaa6aa149e4f" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.771143 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.793738 4680 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.838753 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.849417 4680 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.849477 4680 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a9d1eec-3c1b-405d-91d8-d76e5233e223-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.883451 4680 scope.go:117] "RemoveContainer" containerID="4b0d6ce47086ad55b4ef743d0232ba8f0f05cf9455ca4bb43c008f14ce05c4cc" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.893194 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.902965 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 18:04:16 crc kubenswrapper[4680]: E0217 18:04:16.903411 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a9d1eec-3c1b-405d-91d8-d76e5233e223" containerName="glance-log" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.903427 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a9d1eec-3c1b-405d-91d8-d76e5233e223" containerName="glance-log" Feb 17 18:04:16 crc kubenswrapper[4680]: E0217 18:04:16.903441 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c975a37-6c0c-4126-9764-7e434c09cde1" containerName="mariadb-account-create-update" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.903449 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c975a37-6c0c-4126-9764-7e434c09cde1" containerName="mariadb-account-create-update" Feb 17 18:04:16 crc kubenswrapper[4680]: E0217 18:04:16.903458 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9534f780-22b4-4288-b8d5-3d80336a0e55" containerName="mariadb-database-create" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.903464 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="9534f780-22b4-4288-b8d5-3d80336a0e55" containerName="mariadb-database-create" Feb 17 18:04:16 crc kubenswrapper[4680]: E0217 18:04:16.903481 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fd9bf62-1912-4923-a80f-5f4c5ba9d592" containerName="mariadb-account-create-update" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.903487 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fd9bf62-1912-4923-a80f-5f4c5ba9d592" containerName="mariadb-account-create-update" Feb 17 18:04:16 crc kubenswrapper[4680]: E0217 18:04:16.903499 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a9d1eec-3c1b-405d-91d8-d76e5233e223" containerName="glance-httpd" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.903504 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a9d1eec-3c1b-405d-91d8-d76e5233e223" containerName="glance-httpd" Feb 17 18:04:16 crc 
kubenswrapper[4680]: E0217 18:04:16.903517 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a32553e-b714-453e-9686-a5d72cd3e3fd" containerName="mariadb-database-create" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.903524 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a32553e-b714-453e-9686-a5d72cd3e3fd" containerName="mariadb-database-create" Feb 17 18:04:16 crc kubenswrapper[4680]: E0217 18:04:16.903534 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c035dac9-ced2-4d11-98fb-55b14332a2ac" containerName="mariadb-database-create" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.903540 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="c035dac9-ced2-4d11-98fb-55b14332a2ac" containerName="mariadb-database-create" Feb 17 18:04:16 crc kubenswrapper[4680]: E0217 18:04:16.903546 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47c0ebcb-0014-4cc0-b660-536f4392d4d6" containerName="mariadb-account-create-update" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.903553 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="47c0ebcb-0014-4cc0-b660-536f4392d4d6" containerName="mariadb-account-create-update" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.903761 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="47c0ebcb-0014-4cc0-b660-536f4392d4d6" containerName="mariadb-account-create-update" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.903776 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="c035dac9-ced2-4d11-98fb-55b14332a2ac" containerName="mariadb-database-create" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.903789 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c975a37-6c0c-4126-9764-7e434c09cde1" containerName="mariadb-account-create-update" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.903799 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a32553e-b714-453e-9686-a5d72cd3e3fd" containerName="mariadb-database-create" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.903810 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a9d1eec-3c1b-405d-91d8-d76e5233e223" containerName="glance-log" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.903821 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fd9bf62-1912-4923-a80f-5f4c5ba9d592" containerName="mariadb-account-create-update" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.903829 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="9534f780-22b4-4288-b8d5-3d80336a0e55" containerName="mariadb-database-create" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.903839 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a9d1eec-3c1b-405d-91d8-d76e5233e223" containerName="glance-httpd" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.904815 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.907931 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.908143 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.916354 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.951052 4680 scope.go:117] "RemoveContainer" containerID="ed5ffd776e9ece6e132f39e2a1dcb7f02dc73a7b1dececc030f9aaa6aa149e4f" Feb 17 18:04:16 crc kubenswrapper[4680]: E0217 18:04:16.951418 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed5ffd776e9ece6e132f39e2a1dcb7f02dc73a7b1dececc030f9aaa6aa149e4f\": container with ID starting with ed5ffd776e9ece6e132f39e2a1dcb7f02dc73a7b1dececc030f9aaa6aa149e4f not found: ID does not exist" containerID="ed5ffd776e9ece6e132f39e2a1dcb7f02dc73a7b1dececc030f9aaa6aa149e4f" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.951450 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed5ffd776e9ece6e132f39e2a1dcb7f02dc73a7b1dececc030f9aaa6aa149e4f"} err="failed to get container status \"ed5ffd776e9ece6e132f39e2a1dcb7f02dc73a7b1dececc030f9aaa6aa149e4f\": rpc error: code = NotFound desc = could not find container \"ed5ffd776e9ece6e132f39e2a1dcb7f02dc73a7b1dececc030f9aaa6aa149e4f\": container with ID starting with ed5ffd776e9ece6e132f39e2a1dcb7f02dc73a7b1dececc030f9aaa6aa149e4f not found: ID does not exist" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.951477 4680 scope.go:117] "RemoveContainer" containerID="4b0d6ce47086ad55b4ef743d0232ba8f0f05cf9455ca4bb43c008f14ce05c4cc" Feb 17 18:04:16 crc kubenswrapper[4680]: E0217 18:04:16.951713 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b0d6ce47086ad55b4ef743d0232ba8f0f05cf9455ca4bb43c008f14ce05c4cc\": container with ID starting with 4b0d6ce47086ad55b4ef743d0232ba8f0f05cf9455ca4bb43c008f14ce05c4cc not found: ID does not exist" containerID="4b0d6ce47086ad55b4ef743d0232ba8f0f05cf9455ca4bb43c008f14ce05c4cc" Feb 17 18:04:16 crc kubenswrapper[4680]: I0217 18:04:16.951737 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b0d6ce47086ad55b4ef743d0232ba8f0f05cf9455ca4bb43c008f14ce05c4cc"} err="failed to get container status \"4b0d6ce47086ad55b4ef743d0232ba8f0f05cf9455ca4bb43c008f14ce05c4cc\": rpc error: code = NotFound desc = could not find container \"4b0d6ce47086ad55b4ef743d0232ba8f0f05cf9455ca4bb43c008f14ce05c4cc\": container with ID starting with 4b0d6ce47086ad55b4ef743d0232ba8f0f05cf9455ca4bb43c008f14ce05c4cc not found: ID does not exist" Feb 17 18:04:17 crc kubenswrapper[4680]: I0217 18:04:17.052104 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d07a220-fd50-489d-9df4-ba9ed7ce2c95-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4d07a220-fd50-489d-9df4-ba9ed7ce2c95\") " pod="openstack/glance-default-external-api-0" Feb 17 18:04:17 crc kubenswrapper[4680]: I0217 18:04:17.052234 4680 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d07a220-fd50-489d-9df4-ba9ed7ce2c95-config-data\") pod \"glance-default-external-api-0\" (UID: \"4d07a220-fd50-489d-9df4-ba9ed7ce2c95\") " pod="openstack/glance-default-external-api-0" Feb 17 18:04:17 crc kubenswrapper[4680]: I0217 18:04:17.052296 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d07a220-fd50-489d-9df4-ba9ed7ce2c95-logs\") pod \"glance-default-external-api-0\" (UID: \"4d07a220-fd50-489d-9df4-ba9ed7ce2c95\") " pod="openstack/glance-default-external-api-0" Feb 17 18:04:17 crc kubenswrapper[4680]: I0217 18:04:17.052463 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d07a220-fd50-489d-9df4-ba9ed7ce2c95-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4d07a220-fd50-489d-9df4-ba9ed7ce2c95\") " pod="openstack/glance-default-external-api-0" Feb 17 18:04:17 crc kubenswrapper[4680]: I0217 18:04:17.052729 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4d07a220-fd50-489d-9df4-ba9ed7ce2c95-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4d07a220-fd50-489d-9df4-ba9ed7ce2c95\") " pod="openstack/glance-default-external-api-0" Feb 17 18:04:17 crc kubenswrapper[4680]: I0217 18:04:17.052779 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rlfg\" (UniqueName: \"kubernetes.io/projected/4d07a220-fd50-489d-9df4-ba9ed7ce2c95-kube-api-access-2rlfg\") pod \"glance-default-external-api-0\" (UID: \"4d07a220-fd50-489d-9df4-ba9ed7ce2c95\") " pod="openstack/glance-default-external-api-0" Feb 17 18:04:17 crc kubenswrapper[4680]: I0217 18:04:17.052813 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"4d07a220-fd50-489d-9df4-ba9ed7ce2c95\") " pod="openstack/glance-default-external-api-0" Feb 17 18:04:17 crc kubenswrapper[4680]: I0217 18:04:17.052849 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d07a220-fd50-489d-9df4-ba9ed7ce2c95-scripts\") pod \"glance-default-external-api-0\" (UID: \"4d07a220-fd50-489d-9df4-ba9ed7ce2c95\") " pod="openstack/glance-default-external-api-0" Feb 17 18:04:17 crc kubenswrapper[4680]: I0217 18:04:17.115257 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a9d1eec-3c1b-405d-91d8-d76e5233e223" path="/var/lib/kubelet/pods/7a9d1eec-3c1b-405d-91d8-d76e5233e223/volumes" Feb 17 18:04:17 crc kubenswrapper[4680]: I0217 18:04:17.154334 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d07a220-fd50-489d-9df4-ba9ed7ce2c95-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4d07a220-fd50-489d-9df4-ba9ed7ce2c95\") " pod="openstack/glance-default-external-api-0" Feb 17 18:04:17 crc kubenswrapper[4680]: I0217 18:04:17.154408 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/4d07a220-fd50-489d-9df4-ba9ed7ce2c95-config-data\") pod \"glance-default-external-api-0\" (UID: \"4d07a220-fd50-489d-9df4-ba9ed7ce2c95\") " pod="openstack/glance-default-external-api-0" Feb 17 18:04:17 crc kubenswrapper[4680]: I0217 18:04:17.154434 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d07a220-fd50-489d-9df4-ba9ed7ce2c95-logs\") pod \"glance-default-external-api-0\" (UID: \"4d07a220-fd50-489d-9df4-ba9ed7ce2c95\") " pod="openstack/glance-default-external-api-0" Feb 17 18:04:17 crc kubenswrapper[4680]: I0217 18:04:17.154464 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d07a220-fd50-489d-9df4-ba9ed7ce2c95-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4d07a220-fd50-489d-9df4-ba9ed7ce2c95\") " pod="openstack/glance-default-external-api-0" Feb 17 18:04:17 crc kubenswrapper[4680]: I0217 18:04:17.154543 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4d07a220-fd50-489d-9df4-ba9ed7ce2c95-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4d07a220-fd50-489d-9df4-ba9ed7ce2c95\") " pod="openstack/glance-default-external-api-0" Feb 17 18:04:17 crc kubenswrapper[4680]: I0217 18:04:17.154568 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rlfg\" (UniqueName: \"kubernetes.io/projected/4d07a220-fd50-489d-9df4-ba9ed7ce2c95-kube-api-access-2rlfg\") pod \"glance-default-external-api-0\" (UID: \"4d07a220-fd50-489d-9df4-ba9ed7ce2c95\") " pod="openstack/glance-default-external-api-0" Feb 17 18:04:17 crc kubenswrapper[4680]: I0217 18:04:17.154589 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"4d07a220-fd50-489d-9df4-ba9ed7ce2c95\") " pod="openstack/glance-default-external-api-0" Feb 17 18:04:17 crc kubenswrapper[4680]: I0217 18:04:17.154610 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d07a220-fd50-489d-9df4-ba9ed7ce2c95-scripts\") pod \"glance-default-external-api-0\" (UID: \"4d07a220-fd50-489d-9df4-ba9ed7ce2c95\") " pod="openstack/glance-default-external-api-0" Feb 17 18:04:17 crc kubenswrapper[4680]: I0217 18:04:17.154980 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d07a220-fd50-489d-9df4-ba9ed7ce2c95-logs\") pod \"glance-default-external-api-0\" (UID: \"4d07a220-fd50-489d-9df4-ba9ed7ce2c95\") " pod="openstack/glance-default-external-api-0" Feb 17 18:04:17 crc kubenswrapper[4680]: I0217 18:04:17.155233 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4d07a220-fd50-489d-9df4-ba9ed7ce2c95-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4d07a220-fd50-489d-9df4-ba9ed7ce2c95\") " pod="openstack/glance-default-external-api-0" Feb 17 18:04:17 crc kubenswrapper[4680]: I0217 18:04:17.155246 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: 
\"4d07a220-fd50-489d-9df4-ba9ed7ce2c95\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Feb 17 18:04:17 crc kubenswrapper[4680]: I0217 18:04:17.158524 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d07a220-fd50-489d-9df4-ba9ed7ce2c95-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4d07a220-fd50-489d-9df4-ba9ed7ce2c95\") " pod="openstack/glance-default-external-api-0" Feb 17 18:04:17 crc kubenswrapper[4680]: I0217 18:04:17.160255 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d07a220-fd50-489d-9df4-ba9ed7ce2c95-config-data\") pod \"glance-default-external-api-0\" (UID: \"4d07a220-fd50-489d-9df4-ba9ed7ce2c95\") " pod="openstack/glance-default-external-api-0" Feb 17 18:04:17 crc kubenswrapper[4680]: I0217 18:04:17.161904 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d07a220-fd50-489d-9df4-ba9ed7ce2c95-scripts\") pod \"glance-default-external-api-0\" (UID: \"4d07a220-fd50-489d-9df4-ba9ed7ce2c95\") " pod="openstack/glance-default-external-api-0" Feb 17 18:04:17 crc kubenswrapper[4680]: I0217 18:04:17.162950 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d07a220-fd50-489d-9df4-ba9ed7ce2c95-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4d07a220-fd50-489d-9df4-ba9ed7ce2c95\") " pod="openstack/glance-default-external-api-0" Feb 17 18:04:17 crc kubenswrapper[4680]: I0217 18:04:17.178543 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rlfg\" (UniqueName: \"kubernetes.io/projected/4d07a220-fd50-489d-9df4-ba9ed7ce2c95-kube-api-access-2rlfg\") pod \"glance-default-external-api-0\" (UID: \"4d07a220-fd50-489d-9df4-ba9ed7ce2c95\") " pod="openstack/glance-default-external-api-0" Feb 17 18:04:17 crc kubenswrapper[4680]: I0217 18:04:17.195911 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"4d07a220-fd50-489d-9df4-ba9ed7ce2c95\") " pod="openstack/glance-default-external-api-0" Feb 17 18:04:17 crc kubenswrapper[4680]: I0217 18:04:17.237753 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 18:04:17 crc kubenswrapper[4680]: I0217 18:04:17.957452 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 18:04:17 crc kubenswrapper[4680]: W0217 18:04:17.976531 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d07a220_fd50_489d_9df4_ba9ed7ce2c95.slice/crio-6c01aac4fe689f75280c369b72844e718bbcda71d9d1d81b2a2cf9202f13def0 WatchSource:0}: Error finding container 6c01aac4fe689f75280c369b72844e718bbcda71d9d1d81b2a2cf9202f13def0: Status 404 returned error can't find the container with id 6c01aac4fe689f75280c369b72844e718bbcda71d9d1d81b2a2cf9202f13def0 Feb 17 18:04:18 crc kubenswrapper[4680]: I0217 18:04:18.802643 4680 generic.go:334] "Generic (PLEG): container finished" podID="95585ae8-c7e0-415d-b239-3c078e4d0a7e" containerID="67d8c6a37e4e05cc774793d5c9349a72df97eb750ef23b13f24875fcbfbf089a" exitCode=0 Feb 17 18:04:18 crc kubenswrapper[4680]: I0217 18:04:18.802731 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"95585ae8-c7e0-415d-b239-3c078e4d0a7e","Type":"ContainerDied","Data":"67d8c6a37e4e05cc774793d5c9349a72df97eb750ef23b13f24875fcbfbf089a"} Feb 17 18:04:18 crc kubenswrapper[4680]: I0217 18:04:18.810451 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4d07a220-fd50-489d-9df4-ba9ed7ce2c95","Type":"ContainerStarted","Data":"6c01aac4fe689f75280c369b72844e718bbcda71d9d1d81b2a2cf9202f13def0"} Feb 17 18:04:18 crc kubenswrapper[4680]: I0217 18:04:18.941155 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.093868 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95585ae8-c7e0-415d-b239-3c078e4d0a7e-internal-tls-certs\") pod \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\" (UID: \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") " Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.095547 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95585ae8-c7e0-415d-b239-3c078e4d0a7e-config-data\") pod \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\" (UID: \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") " Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.095722 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\" (UID: \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") " Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.095822 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95585ae8-c7e0-415d-b239-3c078e4d0a7e-logs\") pod \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\" (UID: \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") " Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.095893 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95585ae8-c7e0-415d-b239-3c078e4d0a7e-scripts\") pod \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\" (UID: \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") " Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.096006 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95585ae8-c7e0-415d-b239-3c078e4d0a7e-combined-ca-bundle\") pod \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\" (UID: \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") " Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.096143 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnbrz\" (UniqueName: \"kubernetes.io/projected/95585ae8-c7e0-415d-b239-3c078e4d0a7e-kube-api-access-jnbrz\") pod \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\" (UID: \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") " Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.096269 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95585ae8-c7e0-415d-b239-3c078e4d0a7e-httpd-run\") pod \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\" (UID: \"95585ae8-c7e0-415d-b239-3c078e4d0a7e\") " Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.096767 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95585ae8-c7e0-415d-b239-3c078e4d0a7e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "95585ae8-c7e0-415d-b239-3c078e4d0a7e" (UID: "95585ae8-c7e0-415d-b239-3c078e4d0a7e"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.097241 4680 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95585ae8-c7e0-415d-b239-3c078e4d0a7e-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.099611 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95585ae8-c7e0-415d-b239-3c078e4d0a7e-logs" (OuterVolumeSpecName: "logs") pod "95585ae8-c7e0-415d-b239-3c078e4d0a7e" (UID: "95585ae8-c7e0-415d-b239-3c078e4d0a7e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.122563 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "95585ae8-c7e0-415d-b239-3c078e4d0a7e" (UID: "95585ae8-c7e0-415d-b239-3c078e4d0a7e"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.122951 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95585ae8-c7e0-415d-b239-3c078e4d0a7e-kube-api-access-jnbrz" (OuterVolumeSpecName: "kube-api-access-jnbrz") pod "95585ae8-c7e0-415d-b239-3c078e4d0a7e" (UID: "95585ae8-c7e0-415d-b239-3c078e4d0a7e"). InnerVolumeSpecName "kube-api-access-jnbrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.128510 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95585ae8-c7e0-415d-b239-3c078e4d0a7e-scripts" (OuterVolumeSpecName: "scripts") pod "95585ae8-c7e0-415d-b239-3c078e4d0a7e" (UID: "95585ae8-c7e0-415d-b239-3c078e4d0a7e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.192051 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95585ae8-c7e0-415d-b239-3c078e4d0a7e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "95585ae8-c7e0-415d-b239-3c078e4d0a7e" (UID: "95585ae8-c7e0-415d-b239-3c078e4d0a7e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.192391 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95585ae8-c7e0-415d-b239-3c078e4d0a7e-config-data" (OuterVolumeSpecName: "config-data") pod "95585ae8-c7e0-415d-b239-3c078e4d0a7e" (UID: "95585ae8-c7e0-415d-b239-3c078e4d0a7e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.193300 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95585ae8-c7e0-415d-b239-3c078e4d0a7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95585ae8-c7e0-415d-b239-3c078e4d0a7e" (UID: "95585ae8-c7e0-415d-b239-3c078e4d0a7e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.201661 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95585ae8-c7e0-415d-b239-3c078e4d0a7e-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.201864 4680 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.202000 4680 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95585ae8-c7e0-415d-b239-3c078e4d0a7e-logs\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.202119 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95585ae8-c7e0-415d-b239-3c078e4d0a7e-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.202224 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95585ae8-c7e0-415d-b239-3c078e4d0a7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.202394 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnbrz\" (UniqueName: \"kubernetes.io/projected/95585ae8-c7e0-415d-b239-3c078e4d0a7e-kube-api-access-jnbrz\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.202506 4680 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95585ae8-c7e0-415d-b239-3c078e4d0a7e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.219452 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8ccfcccc8-2qwb2" podUID="d7b7455b-130d-4711-9174-16e352b16455" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.262562 4680 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.304580 4680 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.417309 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6978d4c7cb-4k7vb" podUID="91980b55-1251-4404-a22d-1572dd91ec8a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.819813 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4d07a220-fd50-489d-9df4-ba9ed7ce2c95","Type":"ContainerStarted","Data":"f87ea159ca2c794c7d4018d7f744a4bf8ae5be0d2f297826d3b47942790823d5"} Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.820161 4680 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4d07a220-fd50-489d-9df4-ba9ed7ce2c95","Type":"ContainerStarted","Data":"65f93278d311b364050201917c0b0d28399c7a69f73d623a08cd3bfc12ae523b"} Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.822809 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"95585ae8-c7e0-415d-b239-3c078e4d0a7e","Type":"ContainerDied","Data":"94be2f4ef7c0abc886f968595440a745a2dae7c7589c85f457804ba6d6daa4a2"} Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.822858 4680 scope.go:117] "RemoveContainer" containerID="67d8c6a37e4e05cc774793d5c9349a72df97eb750ef23b13f24875fcbfbf089a" Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.822866 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.844590 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.84456645 podStartE2EDuration="3.84456645s" podCreationTimestamp="2026-02-17 18:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:04:19.83849106 +0000 UTC m=+1189.389389739" watchObservedRunningTime="2026-02-17 18:04:19.84456645 +0000 UTC m=+1189.395465129" Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.859709 4680 scope.go:117] "RemoveContainer" containerID="782e545fb1f32b809f690585710f6b36072eddbf5a37bb9b577aae0fe02d7cdb" Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.923159 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.931074 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.958772 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 18:04:19 crc kubenswrapper[4680]: E0217 18:04:19.959180 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95585ae8-c7e0-415d-b239-3c078e4d0a7e" containerName="glance-log" Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.959197 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="95585ae8-c7e0-415d-b239-3c078e4d0a7e" containerName="glance-log" Feb 17 18:04:19 crc kubenswrapper[4680]: E0217 18:04:19.959214 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95585ae8-c7e0-415d-b239-3c078e4d0a7e" containerName="glance-httpd" Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.959220 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="95585ae8-c7e0-415d-b239-3c078e4d0a7e" containerName="glance-httpd" Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.959394 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="95585ae8-c7e0-415d-b239-3c078e4d0a7e" containerName="glance-log" Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.959415 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="95585ae8-c7e0-415d-b239-3c078e4d0a7e" containerName="glance-httpd" Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.960269 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.963365 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.964183 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 17 18:04:19 crc kubenswrapper[4680]: I0217 18:04:19.986377 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.019151 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.019237 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c-logs\") pod \"glance-default-internal-api-0\" (UID: \"9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.019299 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.019352 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.019382 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.019416 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.019507 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89wvg\" (UniqueName: \"kubernetes.io/projected/9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c-kube-api-access-89wvg\") pod \"glance-default-internal-api-0\" (UID: \"9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.019536 4680 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.120072 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c-logs\") pod \"glance-default-internal-api-0\" (UID: \"9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.120400 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.120429 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.120448 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.120477 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.120546 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89wvg\" (UniqueName: \"kubernetes.io/projected/9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c-kube-api-access-89wvg\") pod \"glance-default-internal-api-0\" (UID: \"9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.120569 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.120622 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.120753 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c-logs\") pod \"glance-default-internal-api-0\" (UID: \"9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.121895 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.126925 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.133844 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.137045 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.137916 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.147631 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.175338 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.176143 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89wvg\" (UniqueName: \"kubernetes.io/projected/9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c-kube-api-access-89wvg\") pod \"glance-default-internal-api-0\" (UID: \"9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c\") " pod="openstack/glance-default-internal-api-0" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.287917 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.679866 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9kszg"] Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.681995 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9kszg" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.690476 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rk8tv" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.690716 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.690875 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.729242 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9kszg"] Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.852976 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cf000f9-0745-4550-84cc-2d2d1d8520fd-config-data\") pod \"nova-cell0-conductor-db-sync-9kszg\" (UID: \"0cf000f9-0745-4550-84cc-2d2d1d8520fd\") " pod="openstack/nova-cell0-conductor-db-sync-9kszg" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.853065 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cf000f9-0745-4550-84cc-2d2d1d8520fd-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9kszg\" (UID: \"0cf000f9-0745-4550-84cc-2d2d1d8520fd\") " pod="openstack/nova-cell0-conductor-db-sync-9kszg" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.853146 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cf000f9-0745-4550-84cc-2d2d1d8520fd-scripts\") pod \"nova-cell0-conductor-db-sync-9kszg\" (UID: \"0cf000f9-0745-4550-84cc-2d2d1d8520fd\") " pod="openstack/nova-cell0-conductor-db-sync-9kszg" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.853249 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdjpk\" (UniqueName: \"kubernetes.io/projected/0cf000f9-0745-4550-84cc-2d2d1d8520fd-kube-api-access-vdjpk\") pod \"nova-cell0-conductor-db-sync-9kszg\" (UID: \"0cf000f9-0745-4550-84cc-2d2d1d8520fd\") " pod="openstack/nova-cell0-conductor-db-sync-9kszg" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.858075 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.954571 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cf000f9-0745-4550-84cc-2d2d1d8520fd-config-data\") pod \"nova-cell0-conductor-db-sync-9kszg\" (UID: \"0cf000f9-0745-4550-84cc-2d2d1d8520fd\") " pod="openstack/nova-cell0-conductor-db-sync-9kszg" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.954684 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0cf000f9-0745-4550-84cc-2d2d1d8520fd-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9kszg\" (UID: \"0cf000f9-0745-4550-84cc-2d2d1d8520fd\") " pod="openstack/nova-cell0-conductor-db-sync-9kszg" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.954736 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cf000f9-0745-4550-84cc-2d2d1d8520fd-scripts\") pod \"nova-cell0-conductor-db-sync-9kszg\" (UID: \"0cf000f9-0745-4550-84cc-2d2d1d8520fd\") " pod="openstack/nova-cell0-conductor-db-sync-9kszg" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.954795 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdjpk\" (UniqueName: \"kubernetes.io/projected/0cf000f9-0745-4550-84cc-2d2d1d8520fd-kube-api-access-vdjpk\") pod \"nova-cell0-conductor-db-sync-9kszg\" (UID: \"0cf000f9-0745-4550-84cc-2d2d1d8520fd\") " pod="openstack/nova-cell0-conductor-db-sync-9kszg" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.965101 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cf000f9-0745-4550-84cc-2d2d1d8520fd-scripts\") pod \"nova-cell0-conductor-db-sync-9kszg\" (UID: \"0cf000f9-0745-4550-84cc-2d2d1d8520fd\") " pod="openstack/nova-cell0-conductor-db-sync-9kszg" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.968444 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="878b21ac-092c-4e88-ba08-f59d0b220d79" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.170:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.969560 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cf000f9-0745-4550-84cc-2d2d1d8520fd-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9kszg\" (UID: \"0cf000f9-0745-4550-84cc-2d2d1d8520fd\") " pod="openstack/nova-cell0-conductor-db-sync-9kszg" Feb 17 18:04:20 crc kubenswrapper[4680]: I0217 18:04:20.991031 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cf000f9-0745-4550-84cc-2d2d1d8520fd-config-data\") pod \"nova-cell0-conductor-db-sync-9kszg\" (UID: \"0cf000f9-0745-4550-84cc-2d2d1d8520fd\") " pod="openstack/nova-cell0-conductor-db-sync-9kszg" Feb 17 18:04:21 crc kubenswrapper[4680]: I0217 18:04:21.036936 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdjpk\" (UniqueName: \"kubernetes.io/projected/0cf000f9-0745-4550-84cc-2d2d1d8520fd-kube-api-access-vdjpk\") pod \"nova-cell0-conductor-db-sync-9kszg\" (UID: \"0cf000f9-0745-4550-84cc-2d2d1d8520fd\") " pod="openstack/nova-cell0-conductor-db-sync-9kszg" Feb 17 18:04:21 crc kubenswrapper[4680]: I0217 18:04:21.157779 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95585ae8-c7e0-415d-b239-3c078e4d0a7e" path="/var/lib/kubelet/pods/95585ae8-c7e0-415d-b239-3c078e4d0a7e/volumes" Feb 17 18:04:21 crc kubenswrapper[4680]: I0217 18:04:21.313864 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9kszg" Feb 17 18:04:21 crc kubenswrapper[4680]: I0217 18:04:21.894099 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9kszg"] Feb 17 18:04:21 crc kubenswrapper[4680]: I0217 18:04:21.911207 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c","Type":"ContainerStarted","Data":"d7255158d20005f8b4c32bcf62c13d0f6a1c40cb3141c930f2641b91afc97891"} Feb 17 18:04:21 crc kubenswrapper[4680]: I0217 18:04:21.911516 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c","Type":"ContainerStarted","Data":"7867fedbe3eea0d34edfb06024a86c08aeef5be20976b95572877ef1d3c3682e"} Feb 17 18:04:21 crc kubenswrapper[4680]: I0217 18:04:21.953901 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a0986f8d-da98-4f6e-bd4e-457e4f405022","Type":"ContainerStarted","Data":"7a3013aefb2ea79592a277f656341ac839902fbe15fa84315abb8ff90a37176a"} Feb 17 18:04:21 crc kubenswrapper[4680]: I0217 18:04:21.955890 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="878b21ac-092c-4e88-ba08-f59d0b220d79" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.170:8776/healthcheck\": context deadline exceeded" Feb 17 18:04:21 crc kubenswrapper[4680]: I0217 18:04:21.989257 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.075675852 podStartE2EDuration="37.989235452s" podCreationTimestamp="2026-02-17 18:03:44 +0000 UTC" firstStartedPulling="2026-02-17 18:03:46.4259594 +0000 UTC m=+1155.976858079" lastFinishedPulling="2026-02-17 18:04:21.33951899 +0000 UTC m=+1190.890417679" observedRunningTime="2026-02-17 18:04:21.975726809 +0000 UTC m=+1191.526625488" watchObservedRunningTime="2026-02-17 18:04:21.989235452 +0000 UTC m=+1191.540134131" Feb 17 18:04:22 crc kubenswrapper[4680]: I0217 18:04:22.964427 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c","Type":"ContainerStarted","Data":"300a8aea2a86baaca1599c52074292500caff92a444c26010130a62bc3653490"} Feb 17 18:04:22 crc kubenswrapper[4680]: I0217 18:04:22.970197 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9kszg" event={"ID":"0cf000f9-0745-4550-84cc-2d2d1d8520fd","Type":"ContainerStarted","Data":"f58336e40649bfb4ab54e08a0fc388c4e3c42efea00a9306e338beb4d75dab9f"} Feb 17 18:04:22 crc kubenswrapper[4680]: I0217 18:04:22.996132 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.996108575 podStartE2EDuration="3.996108575s" podCreationTimestamp="2026-02-17 18:04:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:04:22.986763461 +0000 UTC m=+1192.537662140" watchObservedRunningTime="2026-02-17 18:04:22.996108575 +0000 UTC m=+1192.547007254" Feb 17 18:04:23 crc kubenswrapper[4680]: I0217 18:04:23.237680 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="986e1325-3d30-40e7-b433-64a45965a205" containerName="proxy-httpd" 
probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 17 18:04:25 crc kubenswrapper[4680]: I0217 18:04:25.601661 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 17 18:04:27 crc kubenswrapper[4680]: I0217 18:04:27.239645 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 18:04:27 crc kubenswrapper[4680]: I0217 18:04:27.239979 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 18:04:27 crc kubenswrapper[4680]: I0217 18:04:27.298002 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 18:04:27 crc kubenswrapper[4680]: I0217 18:04:27.362167 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 18:04:28 crc kubenswrapper[4680]: I0217 18:04:28.026024 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 18:04:28 crc kubenswrapper[4680]: I0217 18:04:28.026062 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 18:04:29 crc kubenswrapper[4680]: I0217 18:04:29.204248 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8ccfcccc8-2qwb2" podUID="d7b7455b-130d-4711-9174-16e352b16455" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Feb 17 18:04:29 crc kubenswrapper[4680]: I0217 18:04:29.417426 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6978d4c7cb-4k7vb" podUID="91980b55-1251-4404-a22d-1572dd91ec8a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 17 18:04:30 crc kubenswrapper[4680]: I0217 18:04:30.307157 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 18:04:30 crc kubenswrapper[4680]: I0217 18:04:30.307519 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 18:04:30 crc kubenswrapper[4680]: I0217 18:04:30.372682 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 18:04:30 crc kubenswrapper[4680]: I0217 18:04:30.380109 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 18:04:31 crc kubenswrapper[4680]: I0217 18:04:31.061143 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 18:04:31 crc kubenswrapper[4680]: I0217 18:04:31.061185 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 18:04:31 crc kubenswrapper[4680]: I0217 18:04:31.170237 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 17 18:04:31 crc kubenswrapper[4680]: I0217 18:04:31.170283 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-external-api-0" Feb 17 18:04:34 crc kubenswrapper[4680]: I0217 18:04:34.756852 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 18:04:34 crc kubenswrapper[4680]: I0217 18:04:34.757523 4680 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 18:04:34 crc kubenswrapper[4680]: I0217 18:04:34.857520 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 18:04:38 crc kubenswrapper[4680]: I0217 18:04:38.143559 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9kszg" event={"ID":"0cf000f9-0745-4550-84cc-2d2d1d8520fd","Type":"ContainerStarted","Data":"f31167c43edc01f5787eed5a377c18b9b57e257877217cc954bed19168a5f5c0"} Feb 17 18:04:38 crc kubenswrapper[4680]: I0217 18:04:38.170062 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-9kszg" podStartSLOduration=2.484821098 podStartE2EDuration="18.170035705s" podCreationTimestamp="2026-02-17 18:04:20 +0000 UTC" firstStartedPulling="2026-02-17 18:04:21.953842792 +0000 UTC m=+1191.504741471" lastFinishedPulling="2026-02-17 18:04:37.639057399 +0000 UTC m=+1207.189956078" observedRunningTime="2026-02-17 18:04:38.157806331 +0000 UTC m=+1207.708705030" watchObservedRunningTime="2026-02-17 18:04:38.170035705 +0000 UTC m=+1207.720934394" Feb 17 18:04:39 crc kubenswrapper[4680]: I0217 18:04:39.161744 4680 generic.go:334] "Generic (PLEG): container finished" podID="986e1325-3d30-40e7-b433-64a45965a205" containerID="36c2f67b03f2cfdc0eaf0a3010b07b9ed3f91d665fdccf83cca1c1217cdc7b0a" exitCode=137 Feb 17 18:04:39 crc kubenswrapper[4680]: I0217 18:04:39.161832 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"986e1325-3d30-40e7-b433-64a45965a205","Type":"ContainerDied","Data":"36c2f67b03f2cfdc0eaf0a3010b07b9ed3f91d665fdccf83cca1c1217cdc7b0a"} Feb 17 18:04:39 crc kubenswrapper[4680]: I0217 18:04:39.570548 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 18:04:39 crc kubenswrapper[4680]: I0217 18:04:39.678729 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/986e1325-3d30-40e7-b433-64a45965a205-log-httpd\") pod \"986e1325-3d30-40e7-b433-64a45965a205\" (UID: \"986e1325-3d30-40e7-b433-64a45965a205\") " Feb 17 18:04:39 crc kubenswrapper[4680]: I0217 18:04:39.678832 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/986e1325-3d30-40e7-b433-64a45965a205-combined-ca-bundle\") pod \"986e1325-3d30-40e7-b433-64a45965a205\" (UID: \"986e1325-3d30-40e7-b433-64a45965a205\") " Feb 17 18:04:39 crc kubenswrapper[4680]: I0217 18:04:39.678871 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/986e1325-3d30-40e7-b433-64a45965a205-run-httpd\") pod \"986e1325-3d30-40e7-b433-64a45965a205\" (UID: \"986e1325-3d30-40e7-b433-64a45965a205\") " Feb 17 18:04:39 crc kubenswrapper[4680]: I0217 18:04:39.678899 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/986e1325-3d30-40e7-b433-64a45965a205-config-data\") pod \"986e1325-3d30-40e7-b433-64a45965a205\" (UID: \"986e1325-3d30-40e7-b433-64a45965a205\") " Feb 17 18:04:39 crc kubenswrapper[4680]: I0217 18:04:39.678949 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/986e1325-3d30-40e7-b433-64a45965a205-scripts\") pod \"986e1325-3d30-40e7-b433-64a45965a205\" (UID: \"986e1325-3d30-40e7-b433-64a45965a205\") " Feb 17 18:04:39 crc kubenswrapper[4680]: I0217 18:04:39.678987 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/986e1325-3d30-40e7-b433-64a45965a205-sg-core-conf-yaml\") pod \"986e1325-3d30-40e7-b433-64a45965a205\" (UID: \"986e1325-3d30-40e7-b433-64a45965a205\") " Feb 17 18:04:39 crc kubenswrapper[4680]: I0217 18:04:39.679046 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9m7mp\" (UniqueName: \"kubernetes.io/projected/986e1325-3d30-40e7-b433-64a45965a205-kube-api-access-9m7mp\") pod \"986e1325-3d30-40e7-b433-64a45965a205\" (UID: \"986e1325-3d30-40e7-b433-64a45965a205\") " Feb 17 18:04:39 crc kubenswrapper[4680]: I0217 18:04:39.680147 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/986e1325-3d30-40e7-b433-64a45965a205-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "986e1325-3d30-40e7-b433-64a45965a205" (UID: "986e1325-3d30-40e7-b433-64a45965a205"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:04:39 crc kubenswrapper[4680]: I0217 18:04:39.685605 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/986e1325-3d30-40e7-b433-64a45965a205-scripts" (OuterVolumeSpecName: "scripts") pod "986e1325-3d30-40e7-b433-64a45965a205" (UID: "986e1325-3d30-40e7-b433-64a45965a205"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:04:39 crc kubenswrapper[4680]: I0217 18:04:39.695216 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/986e1325-3d30-40e7-b433-64a45965a205-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "986e1325-3d30-40e7-b433-64a45965a205" (UID: "986e1325-3d30-40e7-b433-64a45965a205"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:04:39 crc kubenswrapper[4680]: I0217 18:04:39.726881 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/986e1325-3d30-40e7-b433-64a45965a205-kube-api-access-9m7mp" (OuterVolumeSpecName: "kube-api-access-9m7mp") pod "986e1325-3d30-40e7-b433-64a45965a205" (UID: "986e1325-3d30-40e7-b433-64a45965a205"). InnerVolumeSpecName "kube-api-access-9m7mp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:04:39 crc kubenswrapper[4680]: I0217 18:04:39.740543 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/986e1325-3d30-40e7-b433-64a45965a205-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "986e1325-3d30-40e7-b433-64a45965a205" (UID: "986e1325-3d30-40e7-b433-64a45965a205"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:04:39 crc kubenswrapper[4680]: I0217 18:04:39.782707 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9m7mp\" (UniqueName: \"kubernetes.io/projected/986e1325-3d30-40e7-b433-64a45965a205-kube-api-access-9m7mp\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:39 crc kubenswrapper[4680]: I0217 18:04:39.782748 4680 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/986e1325-3d30-40e7-b433-64a45965a205-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:39 crc kubenswrapper[4680]: I0217 18:04:39.782758 4680 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/986e1325-3d30-40e7-b433-64a45965a205-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:39 crc kubenswrapper[4680]: I0217 18:04:39.782771 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/986e1325-3d30-40e7-b433-64a45965a205-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:39 crc kubenswrapper[4680]: I0217 18:04:39.782782 4680 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/986e1325-3d30-40e7-b433-64a45965a205-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:39 crc kubenswrapper[4680]: I0217 18:04:39.811794 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/986e1325-3d30-40e7-b433-64a45965a205-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "986e1325-3d30-40e7-b433-64a45965a205" (UID: "986e1325-3d30-40e7-b433-64a45965a205"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:04:39 crc kubenswrapper[4680]: I0217 18:04:39.845468 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/986e1325-3d30-40e7-b433-64a45965a205-config-data" (OuterVolumeSpecName: "config-data") pod "986e1325-3d30-40e7-b433-64a45965a205" (UID: "986e1325-3d30-40e7-b433-64a45965a205"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:04:39 crc kubenswrapper[4680]: I0217 18:04:39.888072 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/986e1325-3d30-40e7-b433-64a45965a205-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:39 crc kubenswrapper[4680]: I0217 18:04:39.888152 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/986e1325-3d30-40e7-b433-64a45965a205-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.176548 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"986e1325-3d30-40e7-b433-64a45965a205","Type":"ContainerDied","Data":"e24cdec87dc1df3b4b070368e0c5c250885c989a9dd775681e6a1cc6d5d18b4d"} Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.176622 4680 scope.go:117] "RemoveContainer" containerID="36c2f67b03f2cfdc0eaf0a3010b07b9ed3f91d665fdccf83cca1c1217cdc7b0a" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.176835 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.221440 4680 scope.go:117] "RemoveContainer" containerID="9cc85ee24eaa25c771b33f03c6bfd5018a9905ecc29506c64a535d4defccb51e" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.260012 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.281686 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.282348 4680 scope.go:117] "RemoveContainer" containerID="440b835f6397c01b13911816cffa50944424acfdfe6b6b63d3d4601d56245662" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.293383 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 18:04:40 crc kubenswrapper[4680]: E0217 18:04:40.293925 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="986e1325-3d30-40e7-b433-64a45965a205" containerName="sg-core" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.293944 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="986e1325-3d30-40e7-b433-64a45965a205" containerName="sg-core" Feb 17 18:04:40 crc kubenswrapper[4680]: E0217 18:04:40.293968 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="986e1325-3d30-40e7-b433-64a45965a205" containerName="ceilometer-central-agent" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.293975 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="986e1325-3d30-40e7-b433-64a45965a205" containerName="ceilometer-central-agent" Feb 17 18:04:40 crc kubenswrapper[4680]: E0217 18:04:40.293988 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="986e1325-3d30-40e7-b433-64a45965a205" containerName="proxy-httpd" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.293994 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="986e1325-3d30-40e7-b433-64a45965a205" containerName="proxy-httpd" Feb 17 18:04:40 crc kubenswrapper[4680]: E0217 18:04:40.294012 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="986e1325-3d30-40e7-b433-64a45965a205" containerName="ceilometer-notification-agent" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.294018 4680 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="986e1325-3d30-40e7-b433-64a45965a205" containerName="ceilometer-notification-agent" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.294180 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="986e1325-3d30-40e7-b433-64a45965a205" containerName="proxy-httpd" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.294198 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="986e1325-3d30-40e7-b433-64a45965a205" containerName="ceilometer-notification-agent" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.294210 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="986e1325-3d30-40e7-b433-64a45965a205" containerName="sg-core" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.294227 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="986e1325-3d30-40e7-b433-64a45965a205" containerName="ceilometer-central-agent" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.296022 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.300623 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.300835 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.306221 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.331383 4680 scope.go:117] "RemoveContainer" containerID="746d627d565816f79c6f47430b25bf1457d8168c311147e7c88d42b9add4bb74" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.399701 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\") " pod="openstack/ceilometer-0" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.399962 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-run-httpd\") pod \"ceilometer-0\" (UID: \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\") " pod="openstack/ceilometer-0" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.400139 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\") " pod="openstack/ceilometer-0" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.400243 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-config-data\") pod \"ceilometer-0\" (UID: \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\") " pod="openstack/ceilometer-0" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.400360 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-log-httpd\") pod \"ceilometer-0\" (UID: 
\"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\") " pod="openstack/ceilometer-0" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.400469 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6snf6\" (UniqueName: \"kubernetes.io/projected/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-kube-api-access-6snf6\") pod \"ceilometer-0\" (UID: \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\") " pod="openstack/ceilometer-0" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.400831 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-scripts\") pod \"ceilometer-0\" (UID: \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\") " pod="openstack/ceilometer-0" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.502644 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\") " pod="openstack/ceilometer-0" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.502705 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-config-data\") pod \"ceilometer-0\" (UID: \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\") " pod="openstack/ceilometer-0" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.502724 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-log-httpd\") pod \"ceilometer-0\" (UID: \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\") " pod="openstack/ceilometer-0" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.502795 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6snf6\" (UniqueName: \"kubernetes.io/projected/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-kube-api-access-6snf6\") pod \"ceilometer-0\" (UID: \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\") " pod="openstack/ceilometer-0" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.502928 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-scripts\") pod \"ceilometer-0\" (UID: \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\") " pod="openstack/ceilometer-0" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.502967 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\") " pod="openstack/ceilometer-0" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.503006 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-run-httpd\") pod \"ceilometer-0\" (UID: \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\") " pod="openstack/ceilometer-0" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.503706 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-run-httpd\") pod 
\"ceilometer-0\" (UID: \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\") " pod="openstack/ceilometer-0" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.503751 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-log-httpd\") pod \"ceilometer-0\" (UID: \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\") " pod="openstack/ceilometer-0" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.522115 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-scripts\") pod \"ceilometer-0\" (UID: \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\") " pod="openstack/ceilometer-0" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.522708 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-config-data\") pod \"ceilometer-0\" (UID: \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\") " pod="openstack/ceilometer-0" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.527777 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\") " pod="openstack/ceilometer-0" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.528270 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\") " pod="openstack/ceilometer-0" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.532084 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6snf6\" (UniqueName: \"kubernetes.io/projected/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-kube-api-access-6snf6\") pod \"ceilometer-0\" (UID: \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\") " pod="openstack/ceilometer-0" Feb 17 18:04:40 crc kubenswrapper[4680]: I0217 18:04:40.641483 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 18:04:41 crc kubenswrapper[4680]: I0217 18:04:41.116413 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="986e1325-3d30-40e7-b433-64a45965a205" path="/var/lib/kubelet/pods/986e1325-3d30-40e7-b433-64a45965a205/volumes" Feb 17 18:04:41 crc kubenswrapper[4680]: I0217 18:04:41.161459 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 18:04:41 crc kubenswrapper[4680]: W0217 18:04:41.167524 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3ce664d_c8a3_4f78_9e02_f3a9034f45c0.slice/crio-1c4f6a7ad8f064ae0cb5b333d61a40c4b1063556a4cfd2c352d8c116205fd1dc WatchSource:0}: Error finding container 1c4f6a7ad8f064ae0cb5b333d61a40c4b1063556a4cfd2c352d8c116205fd1dc: Status 404 returned error can't find the container with id 1c4f6a7ad8f064ae0cb5b333d61a40c4b1063556a4cfd2c352d8c116205fd1dc Feb 17 18:04:41 crc kubenswrapper[4680]: I0217 18:04:41.187570 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0","Type":"ContainerStarted","Data":"1c4f6a7ad8f064ae0cb5b333d61a40c4b1063556a4cfd2c352d8c116205fd1dc"} Feb 17 18:04:42 crc kubenswrapper[4680]: I0217 18:04:42.198947 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0","Type":"ContainerStarted","Data":"a2af538a802796565d56139ed42bbdebee7c7b1753cd0acf87fb8d6315680b75"} Feb 17 18:04:42 crc kubenswrapper[4680]: I0217 18:04:42.600353 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6978d4c7cb-4k7vb" Feb 17 18:04:42 crc kubenswrapper[4680]: I0217 18:04:42.667217 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:04:43 crc kubenswrapper[4680]: I0217 18:04:43.222778 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0","Type":"ContainerStarted","Data":"6cad941674f92ec56f7cdd2ada09280e89999c18f0d1b25b0f13d6c64c9b9b04"} Feb 17 18:04:43 crc kubenswrapper[4680]: I0217 18:04:43.223118 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0","Type":"ContainerStarted","Data":"730cef110c72aba156a961929069c47a1b2ccfb44284322ec8d4942e9b0a46e1"} Feb 17 18:04:44 crc kubenswrapper[4680]: I0217 18:04:44.828467 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:04:45 crc kubenswrapper[4680]: I0217 18:04:45.003081 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6978d4c7cb-4k7vb" Feb 17 18:04:45 crc kubenswrapper[4680]: I0217 18:04:45.088153 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8ccfcccc8-2qwb2"] Feb 17 18:04:45 crc kubenswrapper[4680]: I0217 18:04:45.252639 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-8ccfcccc8-2qwb2" podUID="d7b7455b-130d-4711-9174-16e352b16455" containerName="horizon-log" containerID="cri-o://5dbc357194ba5ec72add1e74a2d767f31fc9addc3c6a23c67bde601fb330b16d" gracePeriod=30 Feb 17 18:04:45 crc kubenswrapper[4680]: I0217 18:04:45.253503 4680 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/horizon-8ccfcccc8-2qwb2" podUID="d7b7455b-130d-4711-9174-16e352b16455" containerName="horizon" containerID="cri-o://bb43b7d8073115a72c22a9eb5e451ca9ee4e1652249db208b2313af6b24dab59" gracePeriod=30 Feb 17 18:04:45 crc kubenswrapper[4680]: I0217 18:04:45.253786 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0","Type":"ContainerStarted","Data":"586f0d7f042acb17edb937ee4f756bd1ee3e2eb1d8cefee429b8886ea30f1181"} Feb 17 18:04:45 crc kubenswrapper[4680]: I0217 18:04:45.253918 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 18:04:47 crc kubenswrapper[4680]: I0217 18:04:47.430178 4680 scope.go:117] "RemoveContainer" containerID="d1cba4e673fbd50e30069835719858d14ee86d7b041731bc112f0a7b888dbaed" Feb 17 18:04:47 crc kubenswrapper[4680]: I0217 18:04:47.468976 4680 scope.go:117] "RemoveContainer" containerID="8f6e999fd77af53beb558cbb8ca6bdfa43444b2899f985788e945e06800943d2" Feb 17 18:04:47 crc kubenswrapper[4680]: I0217 18:04:47.508828 4680 scope.go:117] "RemoveContainer" containerID="c1dd290717706209615a11c583cc9c0c9a60bf733b90f75142631b93d6bbf810" Feb 17 18:04:49 crc kubenswrapper[4680]: I0217 18:04:49.202058 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-8ccfcccc8-2qwb2" podUID="d7b7455b-130d-4711-9174-16e352b16455" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Feb 17 18:04:49 crc kubenswrapper[4680]: I0217 18:04:49.286783 4680 generic.go:334] "Generic (PLEG): container finished" podID="d7b7455b-130d-4711-9174-16e352b16455" containerID="bb43b7d8073115a72c22a9eb5e451ca9ee4e1652249db208b2313af6b24dab59" exitCode=0 Feb 17 18:04:49 crc kubenswrapper[4680]: I0217 18:04:49.286823 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8ccfcccc8-2qwb2" event={"ID":"d7b7455b-130d-4711-9174-16e352b16455","Type":"ContainerDied","Data":"bb43b7d8073115a72c22a9eb5e451ca9ee4e1652249db208b2313af6b24dab59"} Feb 17 18:04:49 crc kubenswrapper[4680]: I0217 18:04:49.286880 4680 scope.go:117] "RemoveContainer" containerID="8b01e5dcc6cc0527ee746e19a1edb564cad531d2e71631a44d11071138f3ae62" Feb 17 18:04:55 crc kubenswrapper[4680]: I0217 18:04:55.349128 4680 generic.go:334] "Generic (PLEG): container finished" podID="0cf000f9-0745-4550-84cc-2d2d1d8520fd" containerID="f31167c43edc01f5787eed5a377c18b9b57e257877217cc954bed19168a5f5c0" exitCode=0 Feb 17 18:04:55 crc kubenswrapper[4680]: I0217 18:04:55.349169 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9kszg" event={"ID":"0cf000f9-0745-4550-84cc-2d2d1d8520fd","Type":"ContainerDied","Data":"f31167c43edc01f5787eed5a377c18b9b57e257877217cc954bed19168a5f5c0"} Feb 17 18:04:55 crc kubenswrapper[4680]: I0217 18:04:55.367007 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=12.386129692 podStartE2EDuration="15.366989439s" podCreationTimestamp="2026-02-17 18:04:40 +0000 UTC" firstStartedPulling="2026-02-17 18:04:41.172717897 +0000 UTC m=+1210.723616566" lastFinishedPulling="2026-02-17 18:04:44.153577634 +0000 UTC m=+1213.704476313" observedRunningTime="2026-02-17 18:04:45.28672247 +0000 UTC m=+1214.837621149" watchObservedRunningTime="2026-02-17 18:04:55.366989439 +0000 UTC m=+1224.917888118" Feb 17 18:04:56 crc 
kubenswrapper[4680]: I0217 18:04:56.699454 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9kszg" Feb 17 18:04:56 crc kubenswrapper[4680]: I0217 18:04:56.801498 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cf000f9-0745-4550-84cc-2d2d1d8520fd-combined-ca-bundle\") pod \"0cf000f9-0745-4550-84cc-2d2d1d8520fd\" (UID: \"0cf000f9-0745-4550-84cc-2d2d1d8520fd\") " Feb 17 18:04:56 crc kubenswrapper[4680]: I0217 18:04:56.801576 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cf000f9-0745-4550-84cc-2d2d1d8520fd-config-data\") pod \"0cf000f9-0745-4550-84cc-2d2d1d8520fd\" (UID: \"0cf000f9-0745-4550-84cc-2d2d1d8520fd\") " Feb 17 18:04:56 crc kubenswrapper[4680]: I0217 18:04:56.801620 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdjpk\" (UniqueName: \"kubernetes.io/projected/0cf000f9-0745-4550-84cc-2d2d1d8520fd-kube-api-access-vdjpk\") pod \"0cf000f9-0745-4550-84cc-2d2d1d8520fd\" (UID: \"0cf000f9-0745-4550-84cc-2d2d1d8520fd\") " Feb 17 18:04:56 crc kubenswrapper[4680]: I0217 18:04:56.801850 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cf000f9-0745-4550-84cc-2d2d1d8520fd-scripts\") pod \"0cf000f9-0745-4550-84cc-2d2d1d8520fd\" (UID: \"0cf000f9-0745-4550-84cc-2d2d1d8520fd\") " Feb 17 18:04:56 crc kubenswrapper[4680]: I0217 18:04:56.809517 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf000f9-0745-4550-84cc-2d2d1d8520fd-scripts" (OuterVolumeSpecName: "scripts") pod "0cf000f9-0745-4550-84cc-2d2d1d8520fd" (UID: "0cf000f9-0745-4550-84cc-2d2d1d8520fd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:04:56 crc kubenswrapper[4680]: I0217 18:04:56.810463 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cf000f9-0745-4550-84cc-2d2d1d8520fd-kube-api-access-vdjpk" (OuterVolumeSpecName: "kube-api-access-vdjpk") pod "0cf000f9-0745-4550-84cc-2d2d1d8520fd" (UID: "0cf000f9-0745-4550-84cc-2d2d1d8520fd"). InnerVolumeSpecName "kube-api-access-vdjpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:04:56 crc kubenswrapper[4680]: I0217 18:04:56.832302 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf000f9-0745-4550-84cc-2d2d1d8520fd-config-data" (OuterVolumeSpecName: "config-data") pod "0cf000f9-0745-4550-84cc-2d2d1d8520fd" (UID: "0cf000f9-0745-4550-84cc-2d2d1d8520fd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:04:56 crc kubenswrapper[4680]: I0217 18:04:56.834411 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf000f9-0745-4550-84cc-2d2d1d8520fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0cf000f9-0745-4550-84cc-2d2d1d8520fd" (UID: "0cf000f9-0745-4550-84cc-2d2d1d8520fd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:04:56 crc kubenswrapper[4680]: I0217 18:04:56.902956 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cf000f9-0745-4550-84cc-2d2d1d8520fd-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:56 crc kubenswrapper[4680]: I0217 18:04:56.902988 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cf000f9-0745-4550-84cc-2d2d1d8520fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:56 crc kubenswrapper[4680]: I0217 18:04:56.903000 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cf000f9-0745-4550-84cc-2d2d1d8520fd-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:56 crc kubenswrapper[4680]: I0217 18:04:56.903010 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdjpk\" (UniqueName: \"kubernetes.io/projected/0cf000f9-0745-4550-84cc-2d2d1d8520fd-kube-api-access-vdjpk\") on node \"crc\" DevicePath \"\"" Feb 17 18:04:57 crc kubenswrapper[4680]: I0217 18:04:57.368040 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9kszg" event={"ID":"0cf000f9-0745-4550-84cc-2d2d1d8520fd","Type":"ContainerDied","Data":"f58336e40649bfb4ab54e08a0fc388c4e3c42efea00a9306e338beb4d75dab9f"} Feb 17 18:04:57 crc kubenswrapper[4680]: I0217 18:04:57.368076 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f58336e40649bfb4ab54e08a0fc388c4e3c42efea00a9306e338beb4d75dab9f" Feb 17 18:04:57 crc kubenswrapper[4680]: I0217 18:04:57.368084 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9kszg" Feb 17 18:04:57 crc kubenswrapper[4680]: I0217 18:04:57.480950 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 18:04:57 crc kubenswrapper[4680]: E0217 18:04:57.481588 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf000f9-0745-4550-84cc-2d2d1d8520fd" containerName="nova-cell0-conductor-db-sync" Feb 17 18:04:57 crc kubenswrapper[4680]: I0217 18:04:57.481666 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf000f9-0745-4550-84cc-2d2d1d8520fd" containerName="nova-cell0-conductor-db-sync" Feb 17 18:04:57 crc kubenswrapper[4680]: I0217 18:04:57.481926 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf000f9-0745-4550-84cc-2d2d1d8520fd" containerName="nova-cell0-conductor-db-sync" Feb 17 18:04:57 crc kubenswrapper[4680]: I0217 18:04:57.482727 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 17 18:04:57 crc kubenswrapper[4680]: I0217 18:04:57.486256 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rk8tv" Feb 17 18:04:57 crc kubenswrapper[4680]: I0217 18:04:57.486692 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 17 18:04:57 crc kubenswrapper[4680]: I0217 18:04:57.495673 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 18:04:57 crc kubenswrapper[4680]: I0217 18:04:57.618943 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac239796-b255-4e18-a027-2bd872c8ccc9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ac239796-b255-4e18-a027-2bd872c8ccc9\") " pod="openstack/nova-cell0-conductor-0" Feb 17 18:04:57 crc kubenswrapper[4680]: I0217 18:04:57.619278 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac239796-b255-4e18-a027-2bd872c8ccc9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ac239796-b255-4e18-a027-2bd872c8ccc9\") " pod="openstack/nova-cell0-conductor-0" Feb 17 18:04:57 crc kubenswrapper[4680]: I0217 18:04:57.619460 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7jss\" (UniqueName: \"kubernetes.io/projected/ac239796-b255-4e18-a027-2bd872c8ccc9-kube-api-access-m7jss\") pod \"nova-cell0-conductor-0\" (UID: \"ac239796-b255-4e18-a027-2bd872c8ccc9\") " pod="openstack/nova-cell0-conductor-0" Feb 17 18:04:57 crc kubenswrapper[4680]: I0217 18:04:57.721525 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac239796-b255-4e18-a027-2bd872c8ccc9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ac239796-b255-4e18-a027-2bd872c8ccc9\") " pod="openstack/nova-cell0-conductor-0" Feb 17 18:04:57 crc kubenswrapper[4680]: I0217 18:04:57.721579 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac239796-b255-4e18-a027-2bd872c8ccc9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ac239796-b255-4e18-a027-2bd872c8ccc9\") " pod="openstack/nova-cell0-conductor-0" Feb 17 18:04:57 crc kubenswrapper[4680]: I0217 18:04:57.721655 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7jss\" (UniqueName: \"kubernetes.io/projected/ac239796-b255-4e18-a027-2bd872c8ccc9-kube-api-access-m7jss\") pod \"nova-cell0-conductor-0\" (UID: \"ac239796-b255-4e18-a027-2bd872c8ccc9\") " pod="openstack/nova-cell0-conductor-0" Feb 17 18:04:57 crc kubenswrapper[4680]: I0217 18:04:57.725902 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac239796-b255-4e18-a027-2bd872c8ccc9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ac239796-b255-4e18-a027-2bd872c8ccc9\") " pod="openstack/nova-cell0-conductor-0" Feb 17 18:04:57 crc kubenswrapper[4680]: I0217 18:04:57.729998 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac239796-b255-4e18-a027-2bd872c8ccc9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"ac239796-b255-4e18-a027-2bd872c8ccc9\") " pod="openstack/nova-cell0-conductor-0" Feb 17 18:04:57 crc kubenswrapper[4680]: I0217 18:04:57.741798 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7jss\" (UniqueName: \"kubernetes.io/projected/ac239796-b255-4e18-a027-2bd872c8ccc9-kube-api-access-m7jss\") pod \"nova-cell0-conductor-0\" (UID: \"ac239796-b255-4e18-a027-2bd872c8ccc9\") " pod="openstack/nova-cell0-conductor-0" Feb 17 18:04:57 crc kubenswrapper[4680]: I0217 18:04:57.799884 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 17 18:04:58 crc kubenswrapper[4680]: I0217 18:04:58.269103 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 18:04:58 crc kubenswrapper[4680]: W0217 18:04:58.278664 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac239796_b255_4e18_a027_2bd872c8ccc9.slice/crio-f4da99aa5cb667503b696e726970a57da629c53de78b4b200180f883a9ca0471 WatchSource:0}: Error finding container f4da99aa5cb667503b696e726970a57da629c53de78b4b200180f883a9ca0471: Status 404 returned error can't find the container with id f4da99aa5cb667503b696e726970a57da629c53de78b4b200180f883a9ca0471 Feb 17 18:04:58 crc kubenswrapper[4680]: I0217 18:04:58.376891 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ac239796-b255-4e18-a027-2bd872c8ccc9","Type":"ContainerStarted","Data":"f4da99aa5cb667503b696e726970a57da629c53de78b4b200180f883a9ca0471"} Feb 17 18:04:59 crc kubenswrapper[4680]: I0217 18:04:59.202136 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-8ccfcccc8-2qwb2" podUID="d7b7455b-130d-4711-9174-16e352b16455" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Feb 17 18:04:59 crc kubenswrapper[4680]: I0217 18:04:59.386130 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ac239796-b255-4e18-a027-2bd872c8ccc9","Type":"ContainerStarted","Data":"ce6bf06a8832c8a6b703812b054b1fc07bc89174c6037d76ecad6b61063cd108"} Feb 17 18:05:00 crc kubenswrapper[4680]: I0217 18:05:00.395685 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 17 18:05:07 crc kubenswrapper[4680]: I0217 18:05:07.826661 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 17 18:05:07 crc kubenswrapper[4680]: I0217 18:05:07.844378 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=10.844359088000001 podStartE2EDuration="10.844359088s" podCreationTimestamp="2026-02-17 18:04:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:04:59.404203201 +0000 UTC m=+1228.955101880" watchObservedRunningTime="2026-02-17 18:05:07.844359088 +0000 UTC m=+1237.395257767" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.348382 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-wpttw"] Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.349595 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-wpttw" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.352394 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.353842 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.369605 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-wpttw"] Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.418688 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk9fd\" (UniqueName: \"kubernetes.io/projected/4eea2a6c-918d-47a3-b782-9a0f035b6bdc-kube-api-access-qk9fd\") pod \"nova-cell0-cell-mapping-wpttw\" (UID: \"4eea2a6c-918d-47a3-b782-9a0f035b6bdc\") " pod="openstack/nova-cell0-cell-mapping-wpttw" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.418800 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eea2a6c-918d-47a3-b782-9a0f035b6bdc-config-data\") pod \"nova-cell0-cell-mapping-wpttw\" (UID: \"4eea2a6c-918d-47a3-b782-9a0f035b6bdc\") " pod="openstack/nova-cell0-cell-mapping-wpttw" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.418843 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4eea2a6c-918d-47a3-b782-9a0f035b6bdc-scripts\") pod \"nova-cell0-cell-mapping-wpttw\" (UID: \"4eea2a6c-918d-47a3-b782-9a0f035b6bdc\") " pod="openstack/nova-cell0-cell-mapping-wpttw" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.418872 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eea2a6c-918d-47a3-b782-9a0f035b6bdc-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-wpttw\" (UID: \"4eea2a6c-918d-47a3-b782-9a0f035b6bdc\") " pod="openstack/nova-cell0-cell-mapping-wpttw" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.526477 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eea2a6c-918d-47a3-b782-9a0f035b6bdc-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-wpttw\" (UID: \"4eea2a6c-918d-47a3-b782-9a0f035b6bdc\") " pod="openstack/nova-cell0-cell-mapping-wpttw" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.526695 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk9fd\" (UniqueName: \"kubernetes.io/projected/4eea2a6c-918d-47a3-b782-9a0f035b6bdc-kube-api-access-qk9fd\") pod \"nova-cell0-cell-mapping-wpttw\" (UID: \"4eea2a6c-918d-47a3-b782-9a0f035b6bdc\") " pod="openstack/nova-cell0-cell-mapping-wpttw" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.527138 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eea2a6c-918d-47a3-b782-9a0f035b6bdc-config-data\") pod \"nova-cell0-cell-mapping-wpttw\" (UID: \"4eea2a6c-918d-47a3-b782-9a0f035b6bdc\") " pod="openstack/nova-cell0-cell-mapping-wpttw" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.527236 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/4eea2a6c-918d-47a3-b782-9a0f035b6bdc-scripts\") pod \"nova-cell0-cell-mapping-wpttw\" (UID: \"4eea2a6c-918d-47a3-b782-9a0f035b6bdc\") " pod="openstack/nova-cell0-cell-mapping-wpttw" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.535743 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eea2a6c-918d-47a3-b782-9a0f035b6bdc-config-data\") pod \"nova-cell0-cell-mapping-wpttw\" (UID: \"4eea2a6c-918d-47a3-b782-9a0f035b6bdc\") " pod="openstack/nova-cell0-cell-mapping-wpttw" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.552105 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eea2a6c-918d-47a3-b782-9a0f035b6bdc-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-wpttw\" (UID: \"4eea2a6c-918d-47a3-b782-9a0f035b6bdc\") " pod="openstack/nova-cell0-cell-mapping-wpttw" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.561911 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4eea2a6c-918d-47a3-b782-9a0f035b6bdc-scripts\") pod \"nova-cell0-cell-mapping-wpttw\" (UID: \"4eea2a6c-918d-47a3-b782-9a0f035b6bdc\") " pod="openstack/nova-cell0-cell-mapping-wpttw" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.611000 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk9fd\" (UniqueName: \"kubernetes.io/projected/4eea2a6c-918d-47a3-b782-9a0f035b6bdc-kube-api-access-qk9fd\") pod \"nova-cell0-cell-mapping-wpttw\" (UID: \"4eea2a6c-918d-47a3-b782-9a0f035b6bdc\") " pod="openstack/nova-cell0-cell-mapping-wpttw" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.652913 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.654184 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.661586 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.678659 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.679128 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-wpttw" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.733701 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.735222 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5b3e0bf-c836-4d70-a6a8-16829590ba9c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a5b3e0bf-c836-4d70-a6a8-16829590ba9c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.735283 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5b3e0bf-c836-4d70-a6a8-16829590ba9c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a5b3e0bf-c836-4d70-a6a8-16829590ba9c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.735372 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx48g\" (UniqueName: \"kubernetes.io/projected/a5b3e0bf-c836-4d70-a6a8-16829590ba9c-kube-api-access-wx48g\") pod \"nova-cell1-novncproxy-0\" (UID: \"a5b3e0bf-c836-4d70-a6a8-16829590ba9c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.735393 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.741893 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.785979 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.838902 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64de592f-9a5f-41c8-8333-66a6173d819d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"64de592f-9a5f-41c8-8333-66a6173d819d\") " pod="openstack/nova-api-0" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.838940 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64de592f-9a5f-41c8-8333-66a6173d819d-logs\") pod \"nova-api-0\" (UID: \"64de592f-9a5f-41c8-8333-66a6173d819d\") " pod="openstack/nova-api-0" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.838969 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6bbp\" (UniqueName: \"kubernetes.io/projected/64de592f-9a5f-41c8-8333-66a6173d819d-kube-api-access-r6bbp\") pod \"nova-api-0\" (UID: \"64de592f-9a5f-41c8-8333-66a6173d819d\") " pod="openstack/nova-api-0" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.839008 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5b3e0bf-c836-4d70-a6a8-16829590ba9c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a5b3e0bf-c836-4d70-a6a8-16829590ba9c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.839047 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64de592f-9a5f-41c8-8333-66a6173d819d-config-data\") pod \"nova-api-0\" (UID: \"64de592f-9a5f-41c8-8333-66a6173d819d\") " pod="openstack/nova-api-0" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.839063 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5b3e0bf-c836-4d70-a6a8-16829590ba9c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a5b3e0bf-c836-4d70-a6a8-16829590ba9c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.839146 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx48g\" (UniqueName: \"kubernetes.io/projected/a5b3e0bf-c836-4d70-a6a8-16829590ba9c-kube-api-access-wx48g\") pod \"nova-cell1-novncproxy-0\" (UID: \"a5b3e0bf-c836-4d70-a6a8-16829590ba9c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.870203 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5b3e0bf-c836-4d70-a6a8-16829590ba9c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a5b3e0bf-c836-4d70-a6a8-16829590ba9c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.871983 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5b3e0bf-c836-4d70-a6a8-16829590ba9c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a5b3e0bf-c836-4d70-a6a8-16829590ba9c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.911764 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx48g\" (UniqueName: \"kubernetes.io/projected/a5b3e0bf-c836-4d70-a6a8-16829590ba9c-kube-api-access-wx48g\") pod \"nova-cell1-novncproxy-0\" (UID: \"a5b3e0bf-c836-4d70-a6a8-16829590ba9c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.926215 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.927775 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.938766 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.940703 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/307b6180-63c1-4999-bab9-d3b939f8bddb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"307b6180-63c1-4999-bab9-d3b939f8bddb\") " pod="openstack/nova-scheduler-0" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.940867 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/307b6180-63c1-4999-bab9-d3b939f8bddb-config-data\") pod \"nova-scheduler-0\" (UID: \"307b6180-63c1-4999-bab9-d3b939f8bddb\") " pod="openstack/nova-scheduler-0" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.941032 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64de592f-9a5f-41c8-8333-66a6173d819d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"64de592f-9a5f-41c8-8333-66a6173d819d\") " pod="openstack/nova-api-0" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.941129 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64de592f-9a5f-41c8-8333-66a6173d819d-logs\") pod \"nova-api-0\" (UID: \"64de592f-9a5f-41c8-8333-66a6173d819d\") " pod="openstack/nova-api-0" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.941218 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6bbp\" (UniqueName: \"kubernetes.io/projected/64de592f-9a5f-41c8-8333-66a6173d819d-kube-api-access-r6bbp\") pod \"nova-api-0\" (UID: \"64de592f-9a5f-41c8-8333-66a6173d819d\") " pod="openstack/nova-api-0" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.941350 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64de592f-9a5f-41c8-8333-66a6173d819d-config-data\") pod \"nova-api-0\" (UID: \"64de592f-9a5f-41c8-8333-66a6173d819d\") " pod="openstack/nova-api-0" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.941442 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq67g\" (UniqueName: \"kubernetes.io/projected/307b6180-63c1-4999-bab9-d3b939f8bddb-kube-api-access-pq67g\") pod \"nova-scheduler-0\" (UID: \"307b6180-63c1-4999-bab9-d3b939f8bddb\") " pod="openstack/nova-scheduler-0" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.947699 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64de592f-9a5f-41c8-8333-66a6173d819d-logs\") pod \"nova-api-0\" (UID: \"64de592f-9a5f-41c8-8333-66a6173d819d\") " pod="openstack/nova-api-0" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.960150 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64de592f-9a5f-41c8-8333-66a6173d819d-config-data\") pod \"nova-api-0\" (UID: \"64de592f-9a5f-41c8-8333-66a6173d819d\") " pod="openstack/nova-api-0" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.964440 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64de592f-9a5f-41c8-8333-66a6173d819d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"64de592f-9a5f-41c8-8333-66a6173d819d\") " pod="openstack/nova-api-0" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.967127 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.983803 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:08 crc kubenswrapper[4680]: I0217 18:05:08.990728 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.006844 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.008815 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6bbp\" (UniqueName: \"kubernetes.io/projected/64de592f-9a5f-41c8-8333-66a6173d819d-kube-api-access-r6bbp\") pod \"nova-api-0\" (UID: \"64de592f-9a5f-41c8-8333-66a6173d819d\") " pod="openstack/nova-api-0" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.022754 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.031457 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.044584 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/307b6180-63c1-4999-bab9-d3b939f8bddb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"307b6180-63c1-4999-bab9-d3b939f8bddb\") " pod="openstack/nova-scheduler-0" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.044643 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/307b6180-63c1-4999-bab9-d3b939f8bddb-config-data\") pod \"nova-scheduler-0\" (UID: \"307b6180-63c1-4999-bab9-d3b939f8bddb\") " pod="openstack/nova-scheduler-0" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.044695 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bd2854a-e9a2-41c3-92fc-1be5f01d44af-config-data\") pod \"nova-metadata-0\" (UID: \"5bd2854a-e9a2-41c3-92fc-1be5f01d44af\") " pod="openstack/nova-metadata-0" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.044736 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bd2854a-e9a2-41c3-92fc-1be5f01d44af-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5bd2854a-e9a2-41c3-92fc-1be5f01d44af\") " pod="openstack/nova-metadata-0" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.044793 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7mvh\" (UniqueName: \"kubernetes.io/projected/5bd2854a-e9a2-41c3-92fc-1be5f01d44af-kube-api-access-f7mvh\") pod \"nova-metadata-0\" (UID: \"5bd2854a-e9a2-41c3-92fc-1be5f01d44af\") " pod="openstack/nova-metadata-0" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.044829 4680 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bd2854a-e9a2-41c3-92fc-1be5f01d44af-logs\") pod \"nova-metadata-0\" (UID: \"5bd2854a-e9a2-41c3-92fc-1be5f01d44af\") " pod="openstack/nova-metadata-0" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.044855 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq67g\" (UniqueName: \"kubernetes.io/projected/307b6180-63c1-4999-bab9-d3b939f8bddb-kube-api-access-pq67g\") pod \"nova-scheduler-0\" (UID: \"307b6180-63c1-4999-bab9-d3b939f8bddb\") " pod="openstack/nova-scheduler-0" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.053245 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/307b6180-63c1-4999-bab9-d3b939f8bddb-config-data\") pod \"nova-scheduler-0\" (UID: \"307b6180-63c1-4999-bab9-d3b939f8bddb\") " pod="openstack/nova-scheduler-0" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.060616 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/307b6180-63c1-4999-bab9-d3b939f8bddb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"307b6180-63c1-4999-bab9-d3b939f8bddb\") " pod="openstack/nova-scheduler-0" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.089494 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq67g\" (UniqueName: \"kubernetes.io/projected/307b6180-63c1-4999-bab9-d3b939f8bddb-kube-api-access-pq67g\") pod \"nova-scheduler-0\" (UID: \"307b6180-63c1-4999-bab9-d3b939f8bddb\") " pod="openstack/nova-scheduler-0" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.144114 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-kgh75"] Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.146810 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-kgh75" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.147344 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bd2854a-e9a2-41c3-92fc-1be5f01d44af-config-data\") pod \"nova-metadata-0\" (UID: \"5bd2854a-e9a2-41c3-92fc-1be5f01d44af\") " pod="openstack/nova-metadata-0" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.147416 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bd2854a-e9a2-41c3-92fc-1be5f01d44af-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5bd2854a-e9a2-41c3-92fc-1be5f01d44af\") " pod="openstack/nova-metadata-0" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.147519 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7mvh\" (UniqueName: \"kubernetes.io/projected/5bd2854a-e9a2-41c3-92fc-1be5f01d44af-kube-api-access-f7mvh\") pod \"nova-metadata-0\" (UID: \"5bd2854a-e9a2-41c3-92fc-1be5f01d44af\") " pod="openstack/nova-metadata-0" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.147619 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bd2854a-e9a2-41c3-92fc-1be5f01d44af-logs\") pod \"nova-metadata-0\" (UID: \"5bd2854a-e9a2-41c3-92fc-1be5f01d44af\") " pod="openstack/nova-metadata-0" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.158366 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.158859 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-kgh75"] Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.162716 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bd2854a-e9a2-41c3-92fc-1be5f01d44af-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5bd2854a-e9a2-41c3-92fc-1be5f01d44af\") " pod="openstack/nova-metadata-0" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.163793 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bd2854a-e9a2-41c3-92fc-1be5f01d44af-logs\") pod \"nova-metadata-0\" (UID: \"5bd2854a-e9a2-41c3-92fc-1be5f01d44af\") " pod="openstack/nova-metadata-0" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.197152 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7mvh\" (UniqueName: \"kubernetes.io/projected/5bd2854a-e9a2-41c3-92fc-1be5f01d44af-kube-api-access-f7mvh\") pod \"nova-metadata-0\" (UID: \"5bd2854a-e9a2-41c3-92fc-1be5f01d44af\") " pod="openstack/nova-metadata-0" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.206475 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-8ccfcccc8-2qwb2" podUID="d7b7455b-130d-4711-9174-16e352b16455" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.206588 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.249819 4680 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/434d2204-587d-4dfe-b473-32490d402e2b-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-kgh75\" (UID: \"434d2204-587d-4dfe-b473-32490d402e2b\") " pod="openstack/dnsmasq-dns-757b4f8459-kgh75" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.250120 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ld4p\" (UniqueName: \"kubernetes.io/projected/434d2204-587d-4dfe-b473-32490d402e2b-kube-api-access-7ld4p\") pod \"dnsmasq-dns-757b4f8459-kgh75\" (UID: \"434d2204-587d-4dfe-b473-32490d402e2b\") " pod="openstack/dnsmasq-dns-757b4f8459-kgh75" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.250272 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/434d2204-587d-4dfe-b473-32490d402e2b-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-kgh75\" (UID: \"434d2204-587d-4dfe-b473-32490d402e2b\") " pod="openstack/dnsmasq-dns-757b4f8459-kgh75" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.250476 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/434d2204-587d-4dfe-b473-32490d402e2b-config\") pod \"dnsmasq-dns-757b4f8459-kgh75\" (UID: \"434d2204-587d-4dfe-b473-32490d402e2b\") " pod="openstack/dnsmasq-dns-757b4f8459-kgh75" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.250676 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/434d2204-587d-4dfe-b473-32490d402e2b-dns-svc\") pod \"dnsmasq-dns-757b4f8459-kgh75\" (UID: \"434d2204-587d-4dfe-b473-32490d402e2b\") " pod="openstack/dnsmasq-dns-757b4f8459-kgh75" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.255794 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/434d2204-587d-4dfe-b473-32490d402e2b-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-kgh75\" (UID: \"434d2204-587d-4dfe-b473-32490d402e2b\") " pod="openstack/dnsmasq-dns-757b4f8459-kgh75" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.309590 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.359249 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/434d2204-587d-4dfe-b473-32490d402e2b-config\") pod \"dnsmasq-dns-757b4f8459-kgh75\" (UID: \"434d2204-587d-4dfe-b473-32490d402e2b\") " pod="openstack/dnsmasq-dns-757b4f8459-kgh75" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.359388 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/434d2204-587d-4dfe-b473-32490d402e2b-dns-svc\") pod \"dnsmasq-dns-757b4f8459-kgh75\" (UID: \"434d2204-587d-4dfe-b473-32490d402e2b\") " pod="openstack/dnsmasq-dns-757b4f8459-kgh75" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.359532 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/434d2204-587d-4dfe-b473-32490d402e2b-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-kgh75\" (UID: \"434d2204-587d-4dfe-b473-32490d402e2b\") " pod="openstack/dnsmasq-dns-757b4f8459-kgh75" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.359625 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/434d2204-587d-4dfe-b473-32490d402e2b-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-kgh75\" (UID: \"434d2204-587d-4dfe-b473-32490d402e2b\") " pod="openstack/dnsmasq-dns-757b4f8459-kgh75" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.359692 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ld4p\" (UniqueName: \"kubernetes.io/projected/434d2204-587d-4dfe-b473-32490d402e2b-kube-api-access-7ld4p\") pod \"dnsmasq-dns-757b4f8459-kgh75\" (UID: \"434d2204-587d-4dfe-b473-32490d402e2b\") " pod="openstack/dnsmasq-dns-757b4f8459-kgh75" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.359740 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/434d2204-587d-4dfe-b473-32490d402e2b-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-kgh75\" (UID: \"434d2204-587d-4dfe-b473-32490d402e2b\") " pod="openstack/dnsmasq-dns-757b4f8459-kgh75" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.363203 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/434d2204-587d-4dfe-b473-32490d402e2b-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-kgh75\" (UID: \"434d2204-587d-4dfe-b473-32490d402e2b\") " pod="openstack/dnsmasq-dns-757b4f8459-kgh75" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.364539 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/434d2204-587d-4dfe-b473-32490d402e2b-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-kgh75\" (UID: \"434d2204-587d-4dfe-b473-32490d402e2b\") " pod="openstack/dnsmasq-dns-757b4f8459-kgh75" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.365488 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/434d2204-587d-4dfe-b473-32490d402e2b-config\") pod \"dnsmasq-dns-757b4f8459-kgh75\" (UID: \"434d2204-587d-4dfe-b473-32490d402e2b\") " pod="openstack/dnsmasq-dns-757b4f8459-kgh75" Feb 17 18:05:09 crc 
kubenswrapper[4680]: I0217 18:05:09.366054 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/434d2204-587d-4dfe-b473-32490d402e2b-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-kgh75\" (UID: \"434d2204-587d-4dfe-b473-32490d402e2b\") " pod="openstack/dnsmasq-dns-757b4f8459-kgh75" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.366555 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/434d2204-587d-4dfe-b473-32490d402e2b-dns-svc\") pod \"dnsmasq-dns-757b4f8459-kgh75\" (UID: \"434d2204-587d-4dfe-b473-32490d402e2b\") " pod="openstack/dnsmasq-dns-757b4f8459-kgh75" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.367267 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bd2854a-e9a2-41c3-92fc-1be5f01d44af-config-data\") pod \"nova-metadata-0\" (UID: \"5bd2854a-e9a2-41c3-92fc-1be5f01d44af\") " pod="openstack/nova-metadata-0" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.390168 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ld4p\" (UniqueName: \"kubernetes.io/projected/434d2204-587d-4dfe-b473-32490d402e2b-kube-api-access-7ld4p\") pod \"dnsmasq-dns-757b4f8459-kgh75\" (UID: \"434d2204-587d-4dfe-b473-32490d402e2b\") " pod="openstack/dnsmasq-dns-757b4f8459-kgh75" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.513450 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-kgh75" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.663968 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 18:05:09 crc kubenswrapper[4680]: I0217 18:05:09.694346 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-wpttw"] Feb 17 18:05:10 crc kubenswrapper[4680]: I0217 18:05:10.098104 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 18:05:10 crc kubenswrapper[4680]: I0217 18:05:10.274868 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 18:05:10 crc kubenswrapper[4680]: W0217 18:05:10.277671 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64de592f_9a5f_41c8_8333_66a6173d819d.slice/crio-0a11b94b5fe5a4a94a9ff2e29cd434a86a212d610ec284aeb08adf04e5ff2e60 WatchSource:0}: Error finding container 0a11b94b5fe5a4a94a9ff2e29cd434a86a212d610ec284aeb08adf04e5ff2e60: Status 404 returned error can't find the container with id 0a11b94b5fe5a4a94a9ff2e29cd434a86a212d610ec284aeb08adf04e5ff2e60 Feb 17 18:05:10 crc kubenswrapper[4680]: I0217 18:05:10.482019 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 18:05:10 crc kubenswrapper[4680]: W0217 18:05:10.486910 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod307b6180_63c1_4999_bab9_d3b939f8bddb.slice/crio-6f46f53c162cf2f42814330e194969edebebcb757c9ef8f66f29d6e9a550c2f8 WatchSource:0}: Error finding container 6f46f53c162cf2f42814330e194969edebebcb757c9ef8f66f29d6e9a550c2f8: Status 404 returned error can't find the container with id 6f46f53c162cf2f42814330e194969edebebcb757c9ef8f66f29d6e9a550c2f8 Feb 17 18:05:10 crc 
kubenswrapper[4680]: I0217 18:05:10.515429 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-wpttw" event={"ID":"4eea2a6c-918d-47a3-b782-9a0f035b6bdc","Type":"ContainerStarted","Data":"6486b7cb7c1814dd6bb8fa0855f1fe117cdff4dfc777924da50e61487487e818"} Feb 17 18:05:10 crc kubenswrapper[4680]: I0217 18:05:10.516291 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-wpttw" event={"ID":"4eea2a6c-918d-47a3-b782-9a0f035b6bdc","Type":"ContainerStarted","Data":"24b6190d4a20dd24123c5fd86ae731ee924f66ecf3f4575606e545cdfcb9fb38"} Feb 17 18:05:10 crc kubenswrapper[4680]: I0217 18:05:10.521631 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a5b3e0bf-c836-4d70-a6a8-16829590ba9c","Type":"ContainerStarted","Data":"b988c929e00ea0217c4a503466ecae6d2529a8884f997f741459612854312e24"} Feb 17 18:05:10 crc kubenswrapper[4680]: I0217 18:05:10.526759 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"64de592f-9a5f-41c8-8333-66a6173d819d","Type":"ContainerStarted","Data":"0a11b94b5fe5a4a94a9ff2e29cd434a86a212d610ec284aeb08adf04e5ff2e60"} Feb 17 18:05:10 crc kubenswrapper[4680]: I0217 18:05:10.531708 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"307b6180-63c1-4999-bab9-d3b939f8bddb","Type":"ContainerStarted","Data":"6f46f53c162cf2f42814330e194969edebebcb757c9ef8f66f29d6e9a550c2f8"} Feb 17 18:05:10 crc kubenswrapper[4680]: I0217 18:05:10.540107 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-wpttw" podStartSLOduration=2.540087437 podStartE2EDuration="2.540087437s" podCreationTimestamp="2026-02-17 18:05:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:05:10.535749141 +0000 UTC m=+1240.086647820" watchObservedRunningTime="2026-02-17 18:05:10.540087437 +0000 UTC m=+1240.090986116" Feb 17 18:05:10 crc kubenswrapper[4680]: I0217 18:05:10.663411 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 17 18:05:10 crc kubenswrapper[4680]: I0217 18:05:10.784199 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-kgh75"] Feb 17 18:05:10 crc kubenswrapper[4680]: I0217 18:05:10.905560 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 18:05:10 crc kubenswrapper[4680]: W0217 18:05:10.913649 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bd2854a_e9a2_41c3_92fc_1be5f01d44af.slice/crio-ffa8c21faa8eb97eccf646bfac8644d1de23e1b438dd96c73aab80fcd61272ae WatchSource:0}: Error finding container ffa8c21faa8eb97eccf646bfac8644d1de23e1b438dd96c73aab80fcd61272ae: Status 404 returned error can't find the container with id ffa8c21faa8eb97eccf646bfac8644d1de23e1b438dd96c73aab80fcd61272ae Feb 17 18:05:10 crc kubenswrapper[4680]: I0217 18:05:10.931774 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-h6gpf"] Feb 17 18:05:10 crc kubenswrapper[4680]: I0217 18:05:10.933293 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-h6gpf" Feb 17 18:05:10 crc kubenswrapper[4680]: I0217 18:05:10.938714 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 17 18:05:10 crc kubenswrapper[4680]: I0217 18:05:10.938903 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 17 18:05:10 crc kubenswrapper[4680]: I0217 18:05:10.949757 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-h6gpf"] Feb 17 18:05:11 crc kubenswrapper[4680]: I0217 18:05:11.114160 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e9fe6c-a137-4f3a-8ff4-3f1678a234cc-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-h6gpf\" (UID: \"96e9fe6c-a137-4f3a-8ff4-3f1678a234cc\") " pod="openstack/nova-cell1-conductor-db-sync-h6gpf" Feb 17 18:05:11 crc kubenswrapper[4680]: I0217 18:05:11.114536 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4qqx\" (UniqueName: \"kubernetes.io/projected/96e9fe6c-a137-4f3a-8ff4-3f1678a234cc-kube-api-access-q4qqx\") pod \"nova-cell1-conductor-db-sync-h6gpf\" (UID: \"96e9fe6c-a137-4f3a-8ff4-3f1678a234cc\") " pod="openstack/nova-cell1-conductor-db-sync-h6gpf" Feb 17 18:05:11 crc kubenswrapper[4680]: I0217 18:05:11.114596 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e9fe6c-a137-4f3a-8ff4-3f1678a234cc-scripts\") pod \"nova-cell1-conductor-db-sync-h6gpf\" (UID: \"96e9fe6c-a137-4f3a-8ff4-3f1678a234cc\") " pod="openstack/nova-cell1-conductor-db-sync-h6gpf" Feb 17 18:05:11 crc kubenswrapper[4680]: I0217 18:05:11.114682 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e9fe6c-a137-4f3a-8ff4-3f1678a234cc-config-data\") pod \"nova-cell1-conductor-db-sync-h6gpf\" (UID: \"96e9fe6c-a137-4f3a-8ff4-3f1678a234cc\") " pod="openstack/nova-cell1-conductor-db-sync-h6gpf" Feb 17 18:05:11 crc kubenswrapper[4680]: I0217 18:05:11.216684 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e9fe6c-a137-4f3a-8ff4-3f1678a234cc-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-h6gpf\" (UID: \"96e9fe6c-a137-4f3a-8ff4-3f1678a234cc\") " pod="openstack/nova-cell1-conductor-db-sync-h6gpf" Feb 17 18:05:11 crc kubenswrapper[4680]: I0217 18:05:11.216745 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4qqx\" (UniqueName: \"kubernetes.io/projected/96e9fe6c-a137-4f3a-8ff4-3f1678a234cc-kube-api-access-q4qqx\") pod \"nova-cell1-conductor-db-sync-h6gpf\" (UID: \"96e9fe6c-a137-4f3a-8ff4-3f1678a234cc\") " pod="openstack/nova-cell1-conductor-db-sync-h6gpf" Feb 17 18:05:11 crc kubenswrapper[4680]: I0217 18:05:11.216808 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e9fe6c-a137-4f3a-8ff4-3f1678a234cc-scripts\") pod \"nova-cell1-conductor-db-sync-h6gpf\" (UID: \"96e9fe6c-a137-4f3a-8ff4-3f1678a234cc\") " pod="openstack/nova-cell1-conductor-db-sync-h6gpf" Feb 17 18:05:11 crc kubenswrapper[4680]: I0217 18:05:11.216915 4680 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e9fe6c-a137-4f3a-8ff4-3f1678a234cc-config-data\") pod \"nova-cell1-conductor-db-sync-h6gpf\" (UID: \"96e9fe6c-a137-4f3a-8ff4-3f1678a234cc\") " pod="openstack/nova-cell1-conductor-db-sync-h6gpf" Feb 17 18:05:11 crc kubenswrapper[4680]: I0217 18:05:11.224106 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e9fe6c-a137-4f3a-8ff4-3f1678a234cc-scripts\") pod \"nova-cell1-conductor-db-sync-h6gpf\" (UID: \"96e9fe6c-a137-4f3a-8ff4-3f1678a234cc\") " pod="openstack/nova-cell1-conductor-db-sync-h6gpf" Feb 17 18:05:11 crc kubenswrapper[4680]: I0217 18:05:11.224227 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e9fe6c-a137-4f3a-8ff4-3f1678a234cc-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-h6gpf\" (UID: \"96e9fe6c-a137-4f3a-8ff4-3f1678a234cc\") " pod="openstack/nova-cell1-conductor-db-sync-h6gpf" Feb 17 18:05:11 crc kubenswrapper[4680]: I0217 18:05:11.224477 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e9fe6c-a137-4f3a-8ff4-3f1678a234cc-config-data\") pod \"nova-cell1-conductor-db-sync-h6gpf\" (UID: \"96e9fe6c-a137-4f3a-8ff4-3f1678a234cc\") " pod="openstack/nova-cell1-conductor-db-sync-h6gpf" Feb 17 18:05:11 crc kubenswrapper[4680]: I0217 18:05:11.265114 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4qqx\" (UniqueName: \"kubernetes.io/projected/96e9fe6c-a137-4f3a-8ff4-3f1678a234cc-kube-api-access-q4qqx\") pod \"nova-cell1-conductor-db-sync-h6gpf\" (UID: \"96e9fe6c-a137-4f3a-8ff4-3f1678a234cc\") " pod="openstack/nova-cell1-conductor-db-sync-h6gpf" Feb 17 18:05:11 crc kubenswrapper[4680]: I0217 18:05:11.357111 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-h6gpf" Feb 17 18:05:11 crc kubenswrapper[4680]: I0217 18:05:11.588460 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5bd2854a-e9a2-41c3-92fc-1be5f01d44af","Type":"ContainerStarted","Data":"ffa8c21faa8eb97eccf646bfac8644d1de23e1b438dd96c73aab80fcd61272ae"} Feb 17 18:05:11 crc kubenswrapper[4680]: I0217 18:05:11.605458 4680 generic.go:334] "Generic (PLEG): container finished" podID="434d2204-587d-4dfe-b473-32490d402e2b" containerID="b8068b3d14f281f98876d3303e53790479ae2e5d358e554b2b4c424c21946540" exitCode=0 Feb 17 18:05:11 crc kubenswrapper[4680]: I0217 18:05:11.605766 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-kgh75" event={"ID":"434d2204-587d-4dfe-b473-32490d402e2b","Type":"ContainerDied","Data":"b8068b3d14f281f98876d3303e53790479ae2e5d358e554b2b4c424c21946540"} Feb 17 18:05:11 crc kubenswrapper[4680]: I0217 18:05:11.605823 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-kgh75" event={"ID":"434d2204-587d-4dfe-b473-32490d402e2b","Type":"ContainerStarted","Data":"7dbb0b678b46b3f23ba9e9c1ebb645fb2ace8ae6e92a2cc97d35f23d76527a60"} Feb 17 18:05:12 crc kubenswrapper[4680]: I0217 18:05:12.098265 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-h6gpf"] Feb 17 18:05:12 crc kubenswrapper[4680]: I0217 18:05:12.648055 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-kgh75" event={"ID":"434d2204-587d-4dfe-b473-32490d402e2b","Type":"ContainerStarted","Data":"48dedf604a5f7d3ea1176543be3ba7bdbc70e4af0ae2b34751c8680b36eb8295"} Feb 17 18:05:12 crc kubenswrapper[4680]: I0217 18:05:12.649644 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-kgh75" Feb 17 18:05:12 crc kubenswrapper[4680]: I0217 18:05:12.668546 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-h6gpf" event={"ID":"96e9fe6c-a137-4f3a-8ff4-3f1678a234cc","Type":"ContainerStarted","Data":"546253378bee7adf3be3a64f591786189d53d7fd0d28df6c1ab890407ae80c8a"} Feb 17 18:05:12 crc kubenswrapper[4680]: I0217 18:05:12.668592 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-h6gpf" event={"ID":"96e9fe6c-a137-4f3a-8ff4-3f1678a234cc","Type":"ContainerStarted","Data":"225332ffa177de418a33b3fd1024b237148d469a7360333091d926a02172dc51"} Feb 17 18:05:12 crc kubenswrapper[4680]: I0217 18:05:12.703034 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-kgh75" podStartSLOduration=3.702923811 podStartE2EDuration="3.702923811s" podCreationTimestamp="2026-02-17 18:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:05:12.683174091 +0000 UTC m=+1242.234072770" watchObservedRunningTime="2026-02-17 18:05:12.702923811 +0000 UTC m=+1242.253822490" Feb 17 18:05:12 crc kubenswrapper[4680]: I0217 18:05:12.719418 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-h6gpf" podStartSLOduration=2.719295234 podStartE2EDuration="2.719295234s" podCreationTimestamp="2026-02-17 18:05:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-17 18:05:12.705593384 +0000 UTC m=+1242.256492063" watchObservedRunningTime="2026-02-17 18:05:12.719295234 +0000 UTC m=+1242.270193913" Feb 17 18:05:13 crc kubenswrapper[4680]: I0217 18:05:13.306186 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 18:05:13 crc kubenswrapper[4680]: I0217 18:05:13.321264 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 18:05:15 crc kubenswrapper[4680]: I0217 18:05:15.727770 4680 generic.go:334] "Generic (PLEG): container finished" podID="d7b7455b-130d-4711-9174-16e352b16455" containerID="5dbc357194ba5ec72add1e74a2d767f31fc9addc3c6a23c67bde601fb330b16d" exitCode=137 Feb 17 18:05:15 crc kubenswrapper[4680]: I0217 18:05:15.727940 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8ccfcccc8-2qwb2" event={"ID":"d7b7455b-130d-4711-9174-16e352b16455","Type":"ContainerDied","Data":"5dbc357194ba5ec72add1e74a2d767f31fc9addc3c6a23c67bde601fb330b16d"} Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.380200 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.490465 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7b7455b-130d-4711-9174-16e352b16455-config-data\") pod \"d7b7455b-130d-4711-9174-16e352b16455\" (UID: \"d7b7455b-130d-4711-9174-16e352b16455\") " Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.490617 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b7455b-130d-4711-9174-16e352b16455-combined-ca-bundle\") pod \"d7b7455b-130d-4711-9174-16e352b16455\" (UID: \"d7b7455b-130d-4711-9174-16e352b16455\") " Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.490676 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7b7455b-130d-4711-9174-16e352b16455-scripts\") pod \"d7b7455b-130d-4711-9174-16e352b16455\" (UID: \"d7b7455b-130d-4711-9174-16e352b16455\") " Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.490727 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d7b7455b-130d-4711-9174-16e352b16455-horizon-secret-key\") pod \"d7b7455b-130d-4711-9174-16e352b16455\" (UID: \"d7b7455b-130d-4711-9174-16e352b16455\") " Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.490796 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7b7455b-130d-4711-9174-16e352b16455-logs\") pod \"d7b7455b-130d-4711-9174-16e352b16455\" (UID: \"d7b7455b-130d-4711-9174-16e352b16455\") " Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.490829 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7b7455b-130d-4711-9174-16e352b16455-horizon-tls-certs\") pod \"d7b7455b-130d-4711-9174-16e352b16455\" (UID: \"d7b7455b-130d-4711-9174-16e352b16455\") " Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.490873 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzj4q\" (UniqueName: 
\"kubernetes.io/projected/d7b7455b-130d-4711-9174-16e352b16455-kube-api-access-jzj4q\") pod \"d7b7455b-130d-4711-9174-16e352b16455\" (UID: \"d7b7455b-130d-4711-9174-16e352b16455\") " Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.491476 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7b7455b-130d-4711-9174-16e352b16455-logs" (OuterVolumeSpecName: "logs") pod "d7b7455b-130d-4711-9174-16e352b16455" (UID: "d7b7455b-130d-4711-9174-16e352b16455"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.498797 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7b7455b-130d-4711-9174-16e352b16455-kube-api-access-jzj4q" (OuterVolumeSpecName: "kube-api-access-jzj4q") pod "d7b7455b-130d-4711-9174-16e352b16455" (UID: "d7b7455b-130d-4711-9174-16e352b16455"). InnerVolumeSpecName "kube-api-access-jzj4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.505669 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7b7455b-130d-4711-9174-16e352b16455-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d7b7455b-130d-4711-9174-16e352b16455" (UID: "d7b7455b-130d-4711-9174-16e352b16455"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.593124 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzj4q\" (UniqueName: \"kubernetes.io/projected/d7b7455b-130d-4711-9174-16e352b16455-kube-api-access-jzj4q\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.593447 4680 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d7b7455b-130d-4711-9174-16e352b16455-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.593548 4680 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7b7455b-130d-4711-9174-16e352b16455-logs\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.679144 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7b7455b-130d-4711-9174-16e352b16455-config-data" (OuterVolumeSpecName: "config-data") pod "d7b7455b-130d-4711-9174-16e352b16455" (UID: "d7b7455b-130d-4711-9174-16e352b16455"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.690340 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7b7455b-130d-4711-9174-16e352b16455-scripts" (OuterVolumeSpecName: "scripts") pod "d7b7455b-130d-4711-9174-16e352b16455" (UID: "d7b7455b-130d-4711-9174-16e352b16455"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.697281 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7b7455b-130d-4711-9174-16e352b16455-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.697331 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7b7455b-130d-4711-9174-16e352b16455-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.706067 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7b7455b-130d-4711-9174-16e352b16455-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7b7455b-130d-4711-9174-16e352b16455" (UID: "d7b7455b-130d-4711-9174-16e352b16455"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.728609 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7b7455b-130d-4711-9174-16e352b16455-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "d7b7455b-130d-4711-9174-16e352b16455" (UID: "d7b7455b-130d-4711-9174-16e352b16455"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.753947 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"64de592f-9a5f-41c8-8333-66a6173d819d","Type":"ContainerStarted","Data":"870fefeb1cf762eb5d24c8843449d0487e1b58211693ceda4da9098c9d110253"} Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.759341 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"307b6180-63c1-4999-bab9-d3b939f8bddb","Type":"ContainerStarted","Data":"5f1f3e4160a06a7c4643d3a73d4b9c244b22cd12a58c1a877a42f923e5f29db7"} Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.764795 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5bd2854a-e9a2-41c3-92fc-1be5f01d44af","Type":"ContainerStarted","Data":"ebcdc95cd812523e9fa1dc8c9cd53f6dbb38efdce728e8663d559a07fe48a056"} Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.764843 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5bd2854a-e9a2-41c3-92fc-1be5f01d44af","Type":"ContainerStarted","Data":"c95ebdc2817c85633ef631ca27fbd9a435fd838e1456007b2ec0e1e42566b8d3"} Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.764984 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5bd2854a-e9a2-41c3-92fc-1be5f01d44af" containerName="nova-metadata-log" containerID="cri-o://c95ebdc2817c85633ef631ca27fbd9a435fd838e1456007b2ec0e1e42566b8d3" gracePeriod=30 Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.765232 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5bd2854a-e9a2-41c3-92fc-1be5f01d44af" containerName="nova-metadata-metadata" containerID="cri-o://ebcdc95cd812523e9fa1dc8c9cd53f6dbb38efdce728e8663d559a07fe48a056" gracePeriod=30 Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.777057 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"a5b3e0bf-c836-4d70-a6a8-16829590ba9c","Type":"ContainerStarted","Data":"bfc024701faed933819be6dc1a433590a0abae932248ca5837911809f1c3e9e6"} Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.777330 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="a5b3e0bf-c836-4d70-a6a8-16829590ba9c" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://bfc024701faed933819be6dc1a433590a0abae932248ca5837911809f1c3e9e6" gracePeriod=30 Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.777721 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.531143652 podStartE2EDuration="8.777708361s" podCreationTimestamp="2026-02-17 18:05:08 +0000 UTC" firstStartedPulling="2026-02-17 18:05:10.491923535 +0000 UTC m=+1240.042822214" lastFinishedPulling="2026-02-17 18:05:15.738488244 +0000 UTC m=+1245.289386923" observedRunningTime="2026-02-17 18:05:16.77287486 +0000 UTC m=+1246.323773549" watchObservedRunningTime="2026-02-17 18:05:16.777708361 +0000 UTC m=+1246.328607040" Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.784995 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8ccfcccc8-2qwb2" event={"ID":"d7b7455b-130d-4711-9174-16e352b16455","Type":"ContainerDied","Data":"0323bb6a2829dff8da034eca9a4d41b1f2869b7eaa6ce03396273b8273424438"} Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.785049 4680 scope.go:117] "RemoveContainer" containerID="bb43b7d8073115a72c22a9eb5e451ca9ee4e1652249db208b2313af6b24dab59" Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.785200 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8ccfcccc8-2qwb2" Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.798606 4680 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7b7455b-130d-4711-9174-16e352b16455-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.798630 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b7455b-130d-4711-9174-16e352b16455-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.805743 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.014492884 podStartE2EDuration="8.805721551s" podCreationTimestamp="2026-02-17 18:05:08 +0000 UTC" firstStartedPulling="2026-02-17 18:05:10.938352467 +0000 UTC m=+1240.489251156" lastFinishedPulling="2026-02-17 18:05:15.729581144 +0000 UTC m=+1245.280479823" observedRunningTime="2026-02-17 18:05:16.792846627 +0000 UTC m=+1246.343745316" watchObservedRunningTime="2026-02-17 18:05:16.805721551 +0000 UTC m=+1246.356620240" Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.821516 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.239028454 podStartE2EDuration="8.821493205s" podCreationTimestamp="2026-02-17 18:05:08 +0000 UTC" firstStartedPulling="2026-02-17 18:05:10.151590094 +0000 UTC m=+1239.702488773" lastFinishedPulling="2026-02-17 18:05:15.734054845 +0000 UTC m=+1245.284953524" observedRunningTime="2026-02-17 18:05:16.812629378 +0000 UTC m=+1246.363528057" watchObservedRunningTime="2026-02-17 
18:05:16.821493205 +0000 UTC m=+1246.372391894" Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.854216 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8ccfcccc8-2qwb2"] Feb 17 18:05:16 crc kubenswrapper[4680]: I0217 18:05:16.865694 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-8ccfcccc8-2qwb2"] Feb 17 18:05:17 crc kubenswrapper[4680]: I0217 18:05:17.030975 4680 scope.go:117] "RemoveContainer" containerID="5dbc357194ba5ec72add1e74a2d767f31fc9addc3c6a23c67bde601fb330b16d" Feb 17 18:05:17 crc kubenswrapper[4680]: I0217 18:05:17.117247 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7b7455b-130d-4711-9174-16e352b16455" path="/var/lib/kubelet/pods/d7b7455b-130d-4711-9174-16e352b16455/volumes" Feb 17 18:05:17 crc kubenswrapper[4680]: I0217 18:05:17.797229 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"64de592f-9a5f-41c8-8333-66a6173d819d","Type":"ContainerStarted","Data":"5d82e37032ab20102b1153d9e12ef037ad79a81d52f40f95000664a2dee85b5d"} Feb 17 18:05:17 crc kubenswrapper[4680]: I0217 18:05:17.799194 4680 generic.go:334] "Generic (PLEG): container finished" podID="5bd2854a-e9a2-41c3-92fc-1be5f01d44af" containerID="c95ebdc2817c85633ef631ca27fbd9a435fd838e1456007b2ec0e1e42566b8d3" exitCode=143 Feb 17 18:05:17 crc kubenswrapper[4680]: I0217 18:05:17.799277 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5bd2854a-e9a2-41c3-92fc-1be5f01d44af","Type":"ContainerDied","Data":"c95ebdc2817c85633ef631ca27fbd9a435fd838e1456007b2ec0e1e42566b8d3"} Feb 17 18:05:17 crc kubenswrapper[4680]: I0217 18:05:17.819898 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.304831516 podStartE2EDuration="9.819873761s" podCreationTimestamp="2026-02-17 18:05:08 +0000 UTC" firstStartedPulling="2026-02-17 18:05:10.282156932 +0000 UTC m=+1239.833055611" lastFinishedPulling="2026-02-17 18:05:15.797199177 +0000 UTC m=+1245.348097856" observedRunningTime="2026-02-17 18:05:17.814179802 +0000 UTC m=+1247.365078481" watchObservedRunningTime="2026-02-17 18:05:17.819873761 +0000 UTC m=+1247.370772440" Feb 17 18:05:18 crc kubenswrapper[4680]: I0217 18:05:18.727825 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 18:05:18 crc kubenswrapper[4680]: I0217 18:05:18.728063 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="6b425c86-a83e-4258-8432-83308132d038" containerName="kube-state-metrics" containerID="cri-o://c3bcd130e4f051598ae9e1882af878bb4158e0d7273e2021a622635c0f4b98f6" gracePeriod=30 Feb 17 18:05:18 crc kubenswrapper[4680]: I0217 18:05:18.988359 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:19 crc kubenswrapper[4680]: I0217 18:05:19.161035 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 18:05:19 crc kubenswrapper[4680]: I0217 18:05:19.161084 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 18:05:19 crc kubenswrapper[4680]: I0217 18:05:19.292436 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 18:05:19 crc kubenswrapper[4680]: I0217 18:05:19.311126 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 17 18:05:19 crc kubenswrapper[4680]: I0217 18:05:19.311513 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 17 18:05:19 crc kubenswrapper[4680]: I0217 18:05:19.463203 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grhpz\" (UniqueName: \"kubernetes.io/projected/6b425c86-a83e-4258-8432-83308132d038-kube-api-access-grhpz\") pod \"6b425c86-a83e-4258-8432-83308132d038\" (UID: \"6b425c86-a83e-4258-8432-83308132d038\") " Feb 17 18:05:19 crc kubenswrapper[4680]: I0217 18:05:19.499638 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b425c86-a83e-4258-8432-83308132d038-kube-api-access-grhpz" (OuterVolumeSpecName: "kube-api-access-grhpz") pod "6b425c86-a83e-4258-8432-83308132d038" (UID: "6b425c86-a83e-4258-8432-83308132d038"). InnerVolumeSpecName "kube-api-access-grhpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:05:19 crc kubenswrapper[4680]: I0217 18:05:19.532462 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-kgh75" Feb 17 18:05:19 crc kubenswrapper[4680]: I0217 18:05:19.568623 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grhpz\" (UniqueName: \"kubernetes.io/projected/6b425c86-a83e-4258-8432-83308132d038-kube-api-access-grhpz\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:19 crc kubenswrapper[4680]: I0217 18:05:19.589748 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 17 18:05:19 crc kubenswrapper[4680]: I0217 18:05:19.664381 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-bdq9p"] Feb 17 18:05:19 crc kubenswrapper[4680]: I0217 18:05:19.664426 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 18:05:19 crc kubenswrapper[4680]: I0217 18:05:19.664595 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 18:05:19 crc kubenswrapper[4680]: I0217 18:05:19.664886 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-bdq9p" podUID="2625239a-cd47-4fd7-9ad2-f87fb89bfc40" containerName="dnsmasq-dns" containerID="cri-o://b4181bbedd1ad100016664e6d6703d38f7b267ebf6ff27fea8eeb16d34c479e1" gracePeriod=10 Feb 17 18:05:19 crc kubenswrapper[4680]: I0217 18:05:19.833574 4680 generic.go:334] "Generic (PLEG): container finished" podID="2625239a-cd47-4fd7-9ad2-f87fb89bfc40" containerID="b4181bbedd1ad100016664e6d6703d38f7b267ebf6ff27fea8eeb16d34c479e1" exitCode=0 Feb 17 18:05:19 crc kubenswrapper[4680]: I0217 18:05:19.833636 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-bdq9p" event={"ID":"2625239a-cd47-4fd7-9ad2-f87fb89bfc40","Type":"ContainerDied","Data":"b4181bbedd1ad100016664e6d6703d38f7b267ebf6ff27fea8eeb16d34c479e1"} Feb 17 18:05:19 crc kubenswrapper[4680]: I0217 18:05:19.846225 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 18:05:19 crc kubenswrapper[4680]: I0217 18:05:19.847031 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6b425c86-a83e-4258-8432-83308132d038","Type":"ContainerDied","Data":"c3bcd130e4f051598ae9e1882af878bb4158e0d7273e2021a622635c0f4b98f6"} Feb 17 18:05:19 crc kubenswrapper[4680]: I0217 18:05:19.847109 4680 scope.go:117] "RemoveContainer" containerID="c3bcd130e4f051598ae9e1882af878bb4158e0d7273e2021a622635c0f4b98f6" Feb 17 18:05:19 crc kubenswrapper[4680]: I0217 18:05:19.851398 4680 generic.go:334] "Generic (PLEG): container finished" podID="6b425c86-a83e-4258-8432-83308132d038" containerID="c3bcd130e4f051598ae9e1882af878bb4158e0d7273e2021a622635c0f4b98f6" exitCode=2 Feb 17 18:05:19 crc kubenswrapper[4680]: I0217 18:05:19.851833 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6b425c86-a83e-4258-8432-83308132d038","Type":"ContainerDied","Data":"529bcfeee2a08fcbc70f19c1b54859239e92ec6e7312ab09cbf3fc893dae4762"} Feb 17 18:05:19 crc kubenswrapper[4680]: I0217 18:05:19.935357 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 17 18:05:19 crc kubenswrapper[4680]: I0217 18:05:19.956402 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 18:05:19 crc kubenswrapper[4680]: I0217 18:05:19.977009 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 18:05:19 crc kubenswrapper[4680]: I0217 18:05:19.985550 4680 scope.go:117] "RemoveContainer" containerID="c3bcd130e4f051598ae9e1882af878bb4158e0d7273e2021a622635c0f4b98f6" Feb 17 18:05:19 crc kubenswrapper[4680]: E0217 18:05:19.988792 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3bcd130e4f051598ae9e1882af878bb4158e0d7273e2021a622635c0f4b98f6\": container with ID starting with c3bcd130e4f051598ae9e1882af878bb4158e0d7273e2021a622635c0f4b98f6 not found: ID does not exist" containerID="c3bcd130e4f051598ae9e1882af878bb4158e0d7273e2021a622635c0f4b98f6" Feb 17 18:05:19 crc kubenswrapper[4680]: I0217 18:05:19.988838 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3bcd130e4f051598ae9e1882af878bb4158e0d7273e2021a622635c0f4b98f6"} err="failed to get container status \"c3bcd130e4f051598ae9e1882af878bb4158e0d7273e2021a622635c0f4b98f6\": rpc error: code = NotFound desc = could not find container \"c3bcd130e4f051598ae9e1882af878bb4158e0d7273e2021a622635c0f4b98f6\": container with ID starting with c3bcd130e4f051598ae9e1882af878bb4158e0d7273e2021a622635c0f4b98f6 not found: ID does not exist" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.006373 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 18:05:20 crc kubenswrapper[4680]: E0217 18:05:20.006878 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7b7455b-130d-4711-9174-16e352b16455" containerName="horizon" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.006897 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7b7455b-130d-4711-9174-16e352b16455" containerName="horizon" Feb 17 18:05:20 crc kubenswrapper[4680]: E0217 18:05:20.006914 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b425c86-a83e-4258-8432-83308132d038" 
containerName="kube-state-metrics" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.006923 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b425c86-a83e-4258-8432-83308132d038" containerName="kube-state-metrics" Feb 17 18:05:20 crc kubenswrapper[4680]: E0217 18:05:20.006937 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7b7455b-130d-4711-9174-16e352b16455" containerName="horizon-log" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.006945 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7b7455b-130d-4711-9174-16e352b16455" containerName="horizon-log" Feb 17 18:05:20 crc kubenswrapper[4680]: E0217 18:05:20.006953 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7b7455b-130d-4711-9174-16e352b16455" containerName="horizon" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.006959 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7b7455b-130d-4711-9174-16e352b16455" containerName="horizon" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.007178 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7b7455b-130d-4711-9174-16e352b16455" containerName="horizon" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.007195 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7b7455b-130d-4711-9174-16e352b16455" containerName="horizon-log" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.007206 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b425c86-a83e-4258-8432-83308132d038" containerName="kube-state-metrics" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.007217 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7b7455b-130d-4711-9174-16e352b16455" containerName="horizon" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.007966 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.010920 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.011164 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.018396 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.192411 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45b4ecc2-2420-49af-9969-0799eda9ee0f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"45b4ecc2-2420-49af-9969-0799eda9ee0f\") " pod="openstack/kube-state-metrics-0" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.192739 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt46x\" (UniqueName: \"kubernetes.io/projected/45b4ecc2-2420-49af-9969-0799eda9ee0f-kube-api-access-pt46x\") pod \"kube-state-metrics-0\" (UID: \"45b4ecc2-2420-49af-9969-0799eda9ee0f\") " pod="openstack/kube-state-metrics-0" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.192780 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/45b4ecc2-2420-49af-9969-0799eda9ee0f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"45b4ecc2-2420-49af-9969-0799eda9ee0f\") " pod="openstack/kube-state-metrics-0" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.192978 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/45b4ecc2-2420-49af-9969-0799eda9ee0f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"45b4ecc2-2420-49af-9969-0799eda9ee0f\") " pod="openstack/kube-state-metrics-0" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.246622 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="64de592f-9a5f-41c8-8333-66a6173d819d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.184:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.246879 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="64de592f-9a5f-41c8-8333-66a6173d819d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.184:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.304832 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt46x\" (UniqueName: \"kubernetes.io/projected/45b4ecc2-2420-49af-9969-0799eda9ee0f-kube-api-access-pt46x\") pod \"kube-state-metrics-0\" (UID: \"45b4ecc2-2420-49af-9969-0799eda9ee0f\") " pod="openstack/kube-state-metrics-0" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.304928 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/45b4ecc2-2420-49af-9969-0799eda9ee0f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"45b4ecc2-2420-49af-9969-0799eda9ee0f\") " pod="openstack/kube-state-metrics-0" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.305045 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/45b4ecc2-2420-49af-9969-0799eda9ee0f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"45b4ecc2-2420-49af-9969-0799eda9ee0f\") " pod="openstack/kube-state-metrics-0" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.305202 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45b4ecc2-2420-49af-9969-0799eda9ee0f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"45b4ecc2-2420-49af-9969-0799eda9ee0f\") " pod="openstack/kube-state-metrics-0" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.315012 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/45b4ecc2-2420-49af-9969-0799eda9ee0f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"45b4ecc2-2420-49af-9969-0799eda9ee0f\") " pod="openstack/kube-state-metrics-0" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.315190 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/45b4ecc2-2420-49af-9969-0799eda9ee0f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"45b4ecc2-2420-49af-9969-0799eda9ee0f\") " pod="openstack/kube-state-metrics-0" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.332766 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45b4ecc2-2420-49af-9969-0799eda9ee0f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"45b4ecc2-2420-49af-9969-0799eda9ee0f\") " pod="openstack/kube-state-metrics-0" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.348160 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt46x\" (UniqueName: \"kubernetes.io/projected/45b4ecc2-2420-49af-9969-0799eda9ee0f-kube-api-access-pt46x\") pod \"kube-state-metrics-0\" (UID: \"45b4ecc2-2420-49af-9969-0799eda9ee0f\") " pod="openstack/kube-state-metrics-0" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.431873 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-bdq9p" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.611234 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-ovsdbserver-sb\") pod \"2625239a-cd47-4fd7-9ad2-f87fb89bfc40\" (UID: \"2625239a-cd47-4fd7-9ad2-f87fb89bfc40\") " Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.611374 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-dns-svc\") pod \"2625239a-cd47-4fd7-9ad2-f87fb89bfc40\" (UID: \"2625239a-cd47-4fd7-9ad2-f87fb89bfc40\") " Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.611479 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-ovsdbserver-nb\") pod \"2625239a-cd47-4fd7-9ad2-f87fb89bfc40\" (UID: \"2625239a-cd47-4fd7-9ad2-f87fb89bfc40\") " Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.611530 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-config\") pod \"2625239a-cd47-4fd7-9ad2-f87fb89bfc40\" (UID: \"2625239a-cd47-4fd7-9ad2-f87fb89bfc40\") " Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.611585 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsvbt\" (UniqueName: \"kubernetes.io/projected/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-kube-api-access-tsvbt\") pod \"2625239a-cd47-4fd7-9ad2-f87fb89bfc40\" (UID: \"2625239a-cd47-4fd7-9ad2-f87fb89bfc40\") " Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.611637 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-dns-swift-storage-0\") pod \"2625239a-cd47-4fd7-9ad2-f87fb89bfc40\" (UID: \"2625239a-cd47-4fd7-9ad2-f87fb89bfc40\") " Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.642747 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.684194 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-kube-api-access-tsvbt" (OuterVolumeSpecName: "kube-api-access-tsvbt") pod "2625239a-cd47-4fd7-9ad2-f87fb89bfc40" (UID: "2625239a-cd47-4fd7-9ad2-f87fb89bfc40"). InnerVolumeSpecName "kube-api-access-tsvbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.717311 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsvbt\" (UniqueName: \"kubernetes.io/projected/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-kube-api-access-tsvbt\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.764685 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2625239a-cd47-4fd7-9ad2-f87fb89bfc40" (UID: "2625239a-cd47-4fd7-9ad2-f87fb89bfc40"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.801832 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-config" (OuterVolumeSpecName: "config") pod "2625239a-cd47-4fd7-9ad2-f87fb89bfc40" (UID: "2625239a-cd47-4fd7-9ad2-f87fb89bfc40"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.821509 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.821545 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-config\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.832392 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2625239a-cd47-4fd7-9ad2-f87fb89bfc40" (UID: "2625239a-cd47-4fd7-9ad2-f87fb89bfc40"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.834476 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2625239a-cd47-4fd7-9ad2-f87fb89bfc40" (UID: "2625239a-cd47-4fd7-9ad2-f87fb89bfc40"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.857834 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2625239a-cd47-4fd7-9ad2-f87fb89bfc40" (UID: "2625239a-cd47-4fd7-9ad2-f87fb89bfc40"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.868872 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-bdq9p" event={"ID":"2625239a-cd47-4fd7-9ad2-f87fb89bfc40","Type":"ContainerDied","Data":"a5991b40623aa0990b0fbdcfbd1b1be56ea1ca992bfa9decaac86707345c7b5c"} Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.868930 4680 scope.go:117] "RemoveContainer" containerID="b4181bbedd1ad100016664e6d6703d38f7b267ebf6ff27fea8eeb16d34c479e1" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.868991 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-bdq9p" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.913391 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-bdq9p"] Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.921931 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-bdq9p"] Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.922856 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.922886 4680 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.922898 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2625239a-cd47-4fd7-9ad2-f87fb89bfc40-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:20 crc kubenswrapper[4680]: I0217 18:05:20.960000 4680 scope.go:117] "RemoveContainer" containerID="4406fedb6452f19643ee69e390092b99be5f48b4f8518d5e091a003b4ea00e6f" Feb 17 18:05:21 crc kubenswrapper[4680]: I0217 18:05:21.174217 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2625239a-cd47-4fd7-9ad2-f87fb89bfc40" path="/var/lib/kubelet/pods/2625239a-cd47-4fd7-9ad2-f87fb89bfc40/volumes" Feb 17 18:05:21 crc kubenswrapper[4680]: I0217 18:05:21.175289 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b425c86-a83e-4258-8432-83308132d038" path="/var/lib/kubelet/pods/6b425c86-a83e-4258-8432-83308132d038/volumes" Feb 17 18:05:21 crc kubenswrapper[4680]: W0217 18:05:21.405884 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45b4ecc2_2420_49af_9969_0799eda9ee0f.slice/crio-2667feb51060aa24dc3687cf29d7b46d22009ab48f9090da72cf8521498e84d7 WatchSource:0}: Error finding container 2667feb51060aa24dc3687cf29d7b46d22009ab48f9090da72cf8521498e84d7: Status 404 returned error can't find the container with id 2667feb51060aa24dc3687cf29d7b46d22009ab48f9090da72cf8521498e84d7 Feb 17 18:05:21 crc kubenswrapper[4680]: I0217 18:05:21.408710 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 18:05:21 crc kubenswrapper[4680]: I0217 18:05:21.408944 4680 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 18:05:21 crc kubenswrapper[4680]: I0217 18:05:21.891215 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"45b4ecc2-2420-49af-9969-0799eda9ee0f","Type":"ContainerStarted","Data":"2667feb51060aa24dc3687cf29d7b46d22009ab48f9090da72cf8521498e84d7"} Feb 17 18:05:22 crc kubenswrapper[4680]: I0217 18:05:22.940773 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"45b4ecc2-2420-49af-9969-0799eda9ee0f","Type":"ContainerStarted","Data":"b947638a072815e8786c468a20d3699849b67670ca970515a690d62450874e75"} Feb 17 18:05:22 crc kubenswrapper[4680]: I0217 18:05:22.941409 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 17 18:05:22 crc 
kubenswrapper[4680]: I0217 18:05:22.974344 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.460327737 podStartE2EDuration="3.974309619s" podCreationTimestamp="2026-02-17 18:05:19 +0000 UTC" firstStartedPulling="2026-02-17 18:05:21.40870366 +0000 UTC m=+1250.959602339" lastFinishedPulling="2026-02-17 18:05:21.922685542 +0000 UTC m=+1251.473584221" observedRunningTime="2026-02-17 18:05:22.971365886 +0000 UTC m=+1252.522264575" watchObservedRunningTime="2026-02-17 18:05:22.974309619 +0000 UTC m=+1252.525208298" Feb 17 18:05:23 crc kubenswrapper[4680]: I0217 18:05:23.531243 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 18:05:23 crc kubenswrapper[4680]: I0217 18:05:23.533228 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3ce664d-c8a3-4f78-9e02-f3a9034f45c0" containerName="ceilometer-central-agent" containerID="cri-o://a2af538a802796565d56139ed42bbdebee7c7b1753cd0acf87fb8d6315680b75" gracePeriod=30 Feb 17 18:05:23 crc kubenswrapper[4680]: I0217 18:05:23.533265 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3ce664d-c8a3-4f78-9e02-f3a9034f45c0" containerName="proxy-httpd" containerID="cri-o://586f0d7f042acb17edb937ee4f756bd1ee3e2eb1d8cefee429b8886ea30f1181" gracePeriod=30 Feb 17 18:05:23 crc kubenswrapper[4680]: I0217 18:05:23.533347 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3ce664d-c8a3-4f78-9e02-f3a9034f45c0" containerName="ceilometer-notification-agent" containerID="cri-o://730cef110c72aba156a961929069c47a1b2ccfb44284322ec8d4942e9b0a46e1" gracePeriod=30 Feb 17 18:05:23 crc kubenswrapper[4680]: I0217 18:05:23.533301 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3ce664d-c8a3-4f78-9e02-f3a9034f45c0" containerName="sg-core" containerID="cri-o://6cad941674f92ec56f7cdd2ada09280e89999c18f0d1b25b0f13d6c64c9b9b04" gracePeriod=30 Feb 17 18:05:23 crc kubenswrapper[4680]: I0217 18:05:23.953725 4680 generic.go:334] "Generic (PLEG): container finished" podID="e3ce664d-c8a3-4f78-9e02-f3a9034f45c0" containerID="586f0d7f042acb17edb937ee4f756bd1ee3e2eb1d8cefee429b8886ea30f1181" exitCode=0 Feb 17 18:05:23 crc kubenswrapper[4680]: I0217 18:05:23.953761 4680 generic.go:334] "Generic (PLEG): container finished" podID="e3ce664d-c8a3-4f78-9e02-f3a9034f45c0" containerID="6cad941674f92ec56f7cdd2ada09280e89999c18f0d1b25b0f13d6c64c9b9b04" exitCode=2 Feb 17 18:05:23 crc kubenswrapper[4680]: I0217 18:05:23.954849 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0","Type":"ContainerDied","Data":"586f0d7f042acb17edb937ee4f756bd1ee3e2eb1d8cefee429b8886ea30f1181"} Feb 17 18:05:23 crc kubenswrapper[4680]: I0217 18:05:23.954879 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0","Type":"ContainerDied","Data":"6cad941674f92ec56f7cdd2ada09280e89999c18f0d1b25b0f13d6c64c9b9b04"} Feb 17 18:05:24 crc kubenswrapper[4680]: I0217 18:05:24.967041 4680 generic.go:334] "Generic (PLEG): container finished" podID="e3ce664d-c8a3-4f78-9e02-f3a9034f45c0" containerID="a2af538a802796565d56139ed42bbdebee7c7b1753cd0acf87fb8d6315680b75" exitCode=0 Feb 17 18:05:24 crc kubenswrapper[4680]: I0217 
18:05:24.967378 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0","Type":"ContainerDied","Data":"a2af538a802796565d56139ed42bbdebee7c7b1753cd0acf87fb8d6315680b75"} Feb 17 18:05:26 crc kubenswrapper[4680]: I0217 18:05:26.987778 4680 generic.go:334] "Generic (PLEG): container finished" podID="96e9fe6c-a137-4f3a-8ff4-3f1678a234cc" containerID="546253378bee7adf3be3a64f591786189d53d7fd0d28df6c1ab890407ae80c8a" exitCode=0 Feb 17 18:05:26 crc kubenswrapper[4680]: I0217 18:05:26.987892 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-h6gpf" event={"ID":"96e9fe6c-a137-4f3a-8ff4-3f1678a234cc","Type":"ContainerDied","Data":"546253378bee7adf3be3a64f591786189d53d7fd0d28df6c1ab890407ae80c8a"} Feb 17 18:05:27 crc kubenswrapper[4680]: I0217 18:05:27.531500 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 18:05:27 crc kubenswrapper[4680]: I0217 18:05:27.710989 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-run-httpd\") pod \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\" (UID: \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\") " Feb 17 18:05:27 crc kubenswrapper[4680]: I0217 18:05:27.711061 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-combined-ca-bundle\") pod \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\" (UID: \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\") " Feb 17 18:05:27 crc kubenswrapper[4680]: I0217 18:05:27.711088 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-sg-core-conf-yaml\") pod \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\" (UID: \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\") " Feb 17 18:05:27 crc kubenswrapper[4680]: I0217 18:05:27.711201 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-config-data\") pod \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\" (UID: \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\") " Feb 17 18:05:27 crc kubenswrapper[4680]: I0217 18:05:27.711310 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-scripts\") pod \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\" (UID: \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\") " Feb 17 18:05:27 crc kubenswrapper[4680]: I0217 18:05:27.711388 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-log-httpd\") pod \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\" (UID: \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\") " Feb 17 18:05:27 crc kubenswrapper[4680]: I0217 18:05:27.711428 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6snf6\" (UniqueName: \"kubernetes.io/projected/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-kube-api-access-6snf6\") pod \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\" (UID: \"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0\") " Feb 17 18:05:27 crc kubenswrapper[4680]: I0217 18:05:27.713728 4680 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e3ce664d-c8a3-4f78-9e02-f3a9034f45c0" (UID: "e3ce664d-c8a3-4f78-9e02-f3a9034f45c0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:05:27 crc kubenswrapper[4680]: I0217 18:05:27.714233 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e3ce664d-c8a3-4f78-9e02-f3a9034f45c0" (UID: "e3ce664d-c8a3-4f78-9e02-f3a9034f45c0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:05:27 crc kubenswrapper[4680]: I0217 18:05:27.720857 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-scripts" (OuterVolumeSpecName: "scripts") pod "e3ce664d-c8a3-4f78-9e02-f3a9034f45c0" (UID: "e3ce664d-c8a3-4f78-9e02-f3a9034f45c0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:05:27 crc kubenswrapper[4680]: I0217 18:05:27.721161 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-kube-api-access-6snf6" (OuterVolumeSpecName: "kube-api-access-6snf6") pod "e3ce664d-c8a3-4f78-9e02-f3a9034f45c0" (UID: "e3ce664d-c8a3-4f78-9e02-f3a9034f45c0"). InnerVolumeSpecName "kube-api-access-6snf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:05:27 crc kubenswrapper[4680]: I0217 18:05:27.745194 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e3ce664d-c8a3-4f78-9e02-f3a9034f45c0" (UID: "e3ce664d-c8a3-4f78-9e02-f3a9034f45c0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:05:27 crc kubenswrapper[4680]: I0217 18:05:27.800454 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3ce664d-c8a3-4f78-9e02-f3a9034f45c0" (UID: "e3ce664d-c8a3-4f78-9e02-f3a9034f45c0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:05:27 crc kubenswrapper[4680]: I0217 18:05:27.814155 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:27 crc kubenswrapper[4680]: I0217 18:05:27.814180 4680 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:27 crc kubenswrapper[4680]: I0217 18:05:27.814192 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6snf6\" (UniqueName: \"kubernetes.io/projected/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-kube-api-access-6snf6\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:27 crc kubenswrapper[4680]: I0217 18:05:27.814201 4680 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:27 crc kubenswrapper[4680]: I0217 18:05:27.814209 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:27 crc kubenswrapper[4680]: I0217 18:05:27.814217 4680 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:27 crc kubenswrapper[4680]: I0217 18:05:27.821665 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-config-data" (OuterVolumeSpecName: "config-data") pod "e3ce664d-c8a3-4f78-9e02-f3a9034f45c0" (UID: "e3ce664d-c8a3-4f78-9e02-f3a9034f45c0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:05:27 crc kubenswrapper[4680]: I0217 18:05:27.915638 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.001506 4680 generic.go:334] "Generic (PLEG): container finished" podID="4eea2a6c-918d-47a3-b782-9a0f035b6bdc" containerID="6486b7cb7c1814dd6bb8fa0855f1fe117cdff4dfc777924da50e61487487e818" exitCode=0 Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.001580 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-wpttw" event={"ID":"4eea2a6c-918d-47a3-b782-9a0f035b6bdc","Type":"ContainerDied","Data":"6486b7cb7c1814dd6bb8fa0855f1fe117cdff4dfc777924da50e61487487e818"} Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.005189 4680 generic.go:334] "Generic (PLEG): container finished" podID="e3ce664d-c8a3-4f78-9e02-f3a9034f45c0" containerID="730cef110c72aba156a961929069c47a1b2ccfb44284322ec8d4942e9b0a46e1" exitCode=0 Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.005217 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0","Type":"ContainerDied","Data":"730cef110c72aba156a961929069c47a1b2ccfb44284322ec8d4942e9b0a46e1"} Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.005272 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3ce664d-c8a3-4f78-9e02-f3a9034f45c0","Type":"ContainerDied","Data":"1c4f6a7ad8f064ae0cb5b333d61a40c4b1063556a4cfd2c352d8c116205fd1dc"} Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.005289 4680 scope.go:117] "RemoveContainer" containerID="586f0d7f042acb17edb937ee4f756bd1ee3e2eb1d8cefee429b8886ea30f1181" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.005573 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.043182 4680 scope.go:117] "RemoveContainer" containerID="6cad941674f92ec56f7cdd2ada09280e89999c18f0d1b25b0f13d6c64c9b9b04" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.061426 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.067297 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.090083 4680 scope.go:117] "RemoveContainer" containerID="730cef110c72aba156a961929069c47a1b2ccfb44284322ec8d4942e9b0a46e1" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.090560 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 18:05:28 crc kubenswrapper[4680]: E0217 18:05:28.090909 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3ce664d-c8a3-4f78-9e02-f3a9034f45c0" containerName="ceilometer-central-agent" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.090926 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ce664d-c8a3-4f78-9e02-f3a9034f45c0" containerName="ceilometer-central-agent" Feb 17 18:05:28 crc kubenswrapper[4680]: E0217 18:05:28.090940 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2625239a-cd47-4fd7-9ad2-f87fb89bfc40" containerName="dnsmasq-dns" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.090946 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="2625239a-cd47-4fd7-9ad2-f87fb89bfc40" containerName="dnsmasq-dns" Feb 17 18:05:28 crc kubenswrapper[4680]: E0217 18:05:28.090975 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3ce664d-c8a3-4f78-9e02-f3a9034f45c0" containerName="ceilometer-notification-agent" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.090981 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ce664d-c8a3-4f78-9e02-f3a9034f45c0" containerName="ceilometer-notification-agent" Feb 17 18:05:28 crc kubenswrapper[4680]: E0217 18:05:28.090992 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3ce664d-c8a3-4f78-9e02-f3a9034f45c0" containerName="sg-core" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.090998 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ce664d-c8a3-4f78-9e02-f3a9034f45c0" containerName="sg-core" Feb 17 18:05:28 crc kubenswrapper[4680]: E0217 18:05:28.091013 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3ce664d-c8a3-4f78-9e02-f3a9034f45c0" containerName="proxy-httpd" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.091019 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ce664d-c8a3-4f78-9e02-f3a9034f45c0" containerName="proxy-httpd" Feb 17 18:05:28 crc kubenswrapper[4680]: E0217 18:05:28.091026 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2625239a-cd47-4fd7-9ad2-f87fb89bfc40" containerName="init" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.091032 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="2625239a-cd47-4fd7-9ad2-f87fb89bfc40" containerName="init" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.091206 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="2625239a-cd47-4fd7-9ad2-f87fb89bfc40" containerName="dnsmasq-dns" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.091222 4680 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e3ce664d-c8a3-4f78-9e02-f3a9034f45c0" containerName="proxy-httpd" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.091232 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3ce664d-c8a3-4f78-9e02-f3a9034f45c0" containerName="ceilometer-notification-agent" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.091245 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3ce664d-c8a3-4f78-9e02-f3a9034f45c0" containerName="sg-core" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.091258 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3ce664d-c8a3-4f78-9e02-f3a9034f45c0" containerName="ceilometer-central-agent" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.099843 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.105836 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.106941 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.109239 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.116164 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.131108 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72189e3b-d3ab-48b3-a4e8-2a6109e97409-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\") " pod="openstack/ceilometer-0" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.131225 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72189e3b-d3ab-48b3-a4e8-2a6109e97409-scripts\") pod \"ceilometer-0\" (UID: \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\") " pod="openstack/ceilometer-0" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.131253 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72189e3b-d3ab-48b3-a4e8-2a6109e97409-run-httpd\") pod \"ceilometer-0\" (UID: \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\") " pod="openstack/ceilometer-0" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.131312 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72189e3b-d3ab-48b3-a4e8-2a6109e97409-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\") " pod="openstack/ceilometer-0" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.131344 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72189e3b-d3ab-48b3-a4e8-2a6109e97409-config-data\") pod \"ceilometer-0\" (UID: \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\") " pod="openstack/ceilometer-0" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.131366 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/72189e3b-d3ab-48b3-a4e8-2a6109e97409-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\") " pod="openstack/ceilometer-0" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.131394 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgdwv\" (UniqueName: \"kubernetes.io/projected/72189e3b-d3ab-48b3-a4e8-2a6109e97409-kube-api-access-jgdwv\") pod \"ceilometer-0\" (UID: \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\") " pod="openstack/ceilometer-0" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.131413 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72189e3b-d3ab-48b3-a4e8-2a6109e97409-log-httpd\") pod \"ceilometer-0\" (UID: \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\") " pod="openstack/ceilometer-0" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.223929 4680 scope.go:117] "RemoveContainer" containerID="a2af538a802796565d56139ed42bbdebee7c7b1753cd0acf87fb8d6315680b75" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.255048 4680 scope.go:117] "RemoveContainer" containerID="586f0d7f042acb17edb937ee4f756bd1ee3e2eb1d8cefee429b8886ea30f1181" Feb 17 18:05:28 crc kubenswrapper[4680]: E0217 18:05:28.257256 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"586f0d7f042acb17edb937ee4f756bd1ee3e2eb1d8cefee429b8886ea30f1181\": container with ID starting with 586f0d7f042acb17edb937ee4f756bd1ee3e2eb1d8cefee429b8886ea30f1181 not found: ID does not exist" containerID="586f0d7f042acb17edb937ee4f756bd1ee3e2eb1d8cefee429b8886ea30f1181" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.257332 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"586f0d7f042acb17edb937ee4f756bd1ee3e2eb1d8cefee429b8886ea30f1181"} err="failed to get container status \"586f0d7f042acb17edb937ee4f756bd1ee3e2eb1d8cefee429b8886ea30f1181\": rpc error: code = NotFound desc = could not find container \"586f0d7f042acb17edb937ee4f756bd1ee3e2eb1d8cefee429b8886ea30f1181\": container with ID starting with 586f0d7f042acb17edb937ee4f756bd1ee3e2eb1d8cefee429b8886ea30f1181 not found: ID does not exist" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.257368 4680 scope.go:117] "RemoveContainer" containerID="6cad941674f92ec56f7cdd2ada09280e89999c18f0d1b25b0f13d6c64c9b9b04" Feb 17 18:05:28 crc kubenswrapper[4680]: E0217 18:05:28.259282 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cad941674f92ec56f7cdd2ada09280e89999c18f0d1b25b0f13d6c64c9b9b04\": container with ID starting with 6cad941674f92ec56f7cdd2ada09280e89999c18f0d1b25b0f13d6c64c9b9b04 not found: ID does not exist" containerID="6cad941674f92ec56f7cdd2ada09280e89999c18f0d1b25b0f13d6c64c9b9b04" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.259445 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cad941674f92ec56f7cdd2ada09280e89999c18f0d1b25b0f13d6c64c9b9b04"} err="failed to get container status \"6cad941674f92ec56f7cdd2ada09280e89999c18f0d1b25b0f13d6c64c9b9b04\": rpc error: code = NotFound desc = could not find container \"6cad941674f92ec56f7cdd2ada09280e89999c18f0d1b25b0f13d6c64c9b9b04\": container with ID starting with 
6cad941674f92ec56f7cdd2ada09280e89999c18f0d1b25b0f13d6c64c9b9b04 not found: ID does not exist" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.259505 4680 scope.go:117] "RemoveContainer" containerID="730cef110c72aba156a961929069c47a1b2ccfb44284322ec8d4942e9b0a46e1" Feb 17 18:05:28 crc kubenswrapper[4680]: E0217 18:05:28.260002 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"730cef110c72aba156a961929069c47a1b2ccfb44284322ec8d4942e9b0a46e1\": container with ID starting with 730cef110c72aba156a961929069c47a1b2ccfb44284322ec8d4942e9b0a46e1 not found: ID does not exist" containerID="730cef110c72aba156a961929069c47a1b2ccfb44284322ec8d4942e9b0a46e1" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.260061 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"730cef110c72aba156a961929069c47a1b2ccfb44284322ec8d4942e9b0a46e1"} err="failed to get container status \"730cef110c72aba156a961929069c47a1b2ccfb44284322ec8d4942e9b0a46e1\": rpc error: code = NotFound desc = could not find container \"730cef110c72aba156a961929069c47a1b2ccfb44284322ec8d4942e9b0a46e1\": container with ID starting with 730cef110c72aba156a961929069c47a1b2ccfb44284322ec8d4942e9b0a46e1 not found: ID does not exist" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.260091 4680 scope.go:117] "RemoveContainer" containerID="a2af538a802796565d56139ed42bbdebee7c7b1753cd0acf87fb8d6315680b75" Feb 17 18:05:28 crc kubenswrapper[4680]: E0217 18:05:28.260397 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2af538a802796565d56139ed42bbdebee7c7b1753cd0acf87fb8d6315680b75\": container with ID starting with a2af538a802796565d56139ed42bbdebee7c7b1753cd0acf87fb8d6315680b75 not found: ID does not exist" containerID="a2af538a802796565d56139ed42bbdebee7c7b1753cd0acf87fb8d6315680b75" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.260418 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2af538a802796565d56139ed42bbdebee7c7b1753cd0acf87fb8d6315680b75"} err="failed to get container status \"a2af538a802796565d56139ed42bbdebee7c7b1753cd0acf87fb8d6315680b75\": rpc error: code = NotFound desc = could not find container \"a2af538a802796565d56139ed42bbdebee7c7b1753cd0acf87fb8d6315680b75\": container with ID starting with a2af538a802796565d56139ed42bbdebee7c7b1753cd0acf87fb8d6315680b75 not found: ID does not exist" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.280269 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72189e3b-d3ab-48b3-a4e8-2a6109e97409-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\") " pod="openstack/ceilometer-0" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.280718 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72189e3b-d3ab-48b3-a4e8-2a6109e97409-config-data\") pod \"ceilometer-0\" (UID: \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\") " pod="openstack/ceilometer-0" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.280850 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/72189e3b-d3ab-48b3-a4e8-2a6109e97409-ceilometer-tls-certs\") pod 
\"ceilometer-0\" (UID: \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\") " pod="openstack/ceilometer-0" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.281006 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgdwv\" (UniqueName: \"kubernetes.io/projected/72189e3b-d3ab-48b3-a4e8-2a6109e97409-kube-api-access-jgdwv\") pod \"ceilometer-0\" (UID: \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\") " pod="openstack/ceilometer-0" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.281193 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72189e3b-d3ab-48b3-a4e8-2a6109e97409-log-httpd\") pod \"ceilometer-0\" (UID: \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\") " pod="openstack/ceilometer-0" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.281478 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72189e3b-d3ab-48b3-a4e8-2a6109e97409-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\") " pod="openstack/ceilometer-0" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.281889 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72189e3b-d3ab-48b3-a4e8-2a6109e97409-scripts\") pod \"ceilometer-0\" (UID: \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\") " pod="openstack/ceilometer-0" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.282662 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72189e3b-d3ab-48b3-a4e8-2a6109e97409-log-httpd\") pod \"ceilometer-0\" (UID: \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\") " pod="openstack/ceilometer-0" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.293571 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72189e3b-d3ab-48b3-a4e8-2a6109e97409-run-httpd\") pod \"ceilometer-0\" (UID: \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\") " pod="openstack/ceilometer-0" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.295838 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72189e3b-d3ab-48b3-a4e8-2a6109e97409-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\") " pod="openstack/ceilometer-0" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.297678 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72189e3b-d3ab-48b3-a4e8-2a6109e97409-run-httpd\") pod \"ceilometer-0\" (UID: \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\") " pod="openstack/ceilometer-0" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.298575 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72189e3b-d3ab-48b3-a4e8-2a6109e97409-config-data\") pod \"ceilometer-0\" (UID: \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\") " pod="openstack/ceilometer-0" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.298870 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72189e3b-d3ab-48b3-a4e8-2a6109e97409-scripts\") pod \"ceilometer-0\" (UID: \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\") " 
pod="openstack/ceilometer-0" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.299885 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72189e3b-d3ab-48b3-a4e8-2a6109e97409-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\") " pod="openstack/ceilometer-0" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.302797 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/72189e3b-d3ab-48b3-a4e8-2a6109e97409-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\") " pod="openstack/ceilometer-0" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.307192 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgdwv\" (UniqueName: \"kubernetes.io/projected/72189e3b-d3ab-48b3-a4e8-2a6109e97409-kube-api-access-jgdwv\") pod \"ceilometer-0\" (UID: \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\") " pod="openstack/ceilometer-0" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.483241 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.505721 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-h6gpf" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.599484 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e9fe6c-a137-4f3a-8ff4-3f1678a234cc-combined-ca-bundle\") pod \"96e9fe6c-a137-4f3a-8ff4-3f1678a234cc\" (UID: \"96e9fe6c-a137-4f3a-8ff4-3f1678a234cc\") " Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.599994 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e9fe6c-a137-4f3a-8ff4-3f1678a234cc-scripts\") pod \"96e9fe6c-a137-4f3a-8ff4-3f1678a234cc\" (UID: \"96e9fe6c-a137-4f3a-8ff4-3f1678a234cc\") " Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.600075 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e9fe6c-a137-4f3a-8ff4-3f1678a234cc-config-data\") pod \"96e9fe6c-a137-4f3a-8ff4-3f1678a234cc\" (UID: \"96e9fe6c-a137-4f3a-8ff4-3f1678a234cc\") " Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.600154 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4qqx\" (UniqueName: \"kubernetes.io/projected/96e9fe6c-a137-4f3a-8ff4-3f1678a234cc-kube-api-access-q4qqx\") pod \"96e9fe6c-a137-4f3a-8ff4-3f1678a234cc\" (UID: \"96e9fe6c-a137-4f3a-8ff4-3f1678a234cc\") " Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.604169 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e9fe6c-a137-4f3a-8ff4-3f1678a234cc-scripts" (OuterVolumeSpecName: "scripts") pod "96e9fe6c-a137-4f3a-8ff4-3f1678a234cc" (UID: "96e9fe6c-a137-4f3a-8ff4-3f1678a234cc"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.604788 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96e9fe6c-a137-4f3a-8ff4-3f1678a234cc-kube-api-access-q4qqx" (OuterVolumeSpecName: "kube-api-access-q4qqx") pod "96e9fe6c-a137-4f3a-8ff4-3f1678a234cc" (UID: "96e9fe6c-a137-4f3a-8ff4-3f1678a234cc"). InnerVolumeSpecName "kube-api-access-q4qqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.702080 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e9fe6c-a137-4f3a-8ff4-3f1678a234cc-config-data" (OuterVolumeSpecName: "config-data") pod "96e9fe6c-a137-4f3a-8ff4-3f1678a234cc" (UID: "96e9fe6c-a137-4f3a-8ff4-3f1678a234cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.703965 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e9fe6c-a137-4f3a-8ff4-3f1678a234cc-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.703991 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e9fe6c-a137-4f3a-8ff4-3f1678a234cc-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.704002 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4qqx\" (UniqueName: \"kubernetes.io/projected/96e9fe6c-a137-4f3a-8ff4-3f1678a234cc-kube-api-access-q4qqx\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.704088 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e9fe6c-a137-4f3a-8ff4-3f1678a234cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96e9fe6c-a137-4f3a-8ff4-3f1678a234cc" (UID: "96e9fe6c-a137-4f3a-8ff4-3f1678a234cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:05:28 crc kubenswrapper[4680]: I0217 18:05:28.806500 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e9fe6c-a137-4f3a-8ff4-3f1678a234cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.022215 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-h6gpf" Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.022203 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-h6gpf" event={"ID":"96e9fe6c-a137-4f3a-8ff4-3f1678a234cc","Type":"ContainerDied","Data":"225332ffa177de418a33b3fd1024b237148d469a7360333091d926a02172dc51"} Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.022257 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="225332ffa177de418a33b3fd1024b237148d469a7360333091d926a02172dc51" Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.094276 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.104023 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 17 18:05:29 crc kubenswrapper[4680]: E0217 18:05:29.104600 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96e9fe6c-a137-4f3a-8ff4-3f1678a234cc" containerName="nova-cell1-conductor-db-sync" Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.104689 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="96e9fe6c-a137-4f3a-8ff4-3f1678a234cc" containerName="nova-cell1-conductor-db-sync" Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.104990 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="96e9fe6c-a137-4f3a-8ff4-3f1678a234cc" containerName="nova-cell1-conductor-db-sync" Feb 17 18:05:29 crc kubenswrapper[4680]: W0217 18:05:29.108162 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72189e3b_d3ab_48b3_a4e8_2a6109e97409.slice/crio-875a2ccaaabc4d457f8aafb42a8e0da7aeef3421142c926052ca334ebaea3286 WatchSource:0}: Error finding container 875a2ccaaabc4d457f8aafb42a8e0da7aeef3421142c926052ca334ebaea3286: Status 404 returned error can't find the container with id 875a2ccaaabc4d457f8aafb42a8e0da7aeef3421142c926052ca334ebaea3286 Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.112126 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.119942 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.156705 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3ce664d-c8a3-4f78-9e02-f3a9034f45c0" path="/var/lib/kubelet/pods/e3ce664d-c8a3-4f78-9e02-f3a9034f45c0/volumes" Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.158101 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.196094 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.197386 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.213809 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aab84dd-8ed8-46fd-931a-31759485bb02-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"2aab84dd-8ed8-46fd-931a-31759485bb02\") " pod="openstack/nova-cell1-conductor-0" Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.213859 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aab84dd-8ed8-46fd-931a-31759485bb02-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"2aab84dd-8ed8-46fd-931a-31759485bb02\") " pod="openstack/nova-cell1-conductor-0" Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.213882 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn64z\" (UniqueName: \"kubernetes.io/projected/2aab84dd-8ed8-46fd-931a-31759485bb02-kube-api-access-vn64z\") pod \"nova-cell1-conductor-0\" (UID: \"2aab84dd-8ed8-46fd-931a-31759485bb02\") " pod="openstack/nova-cell1-conductor-0" Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.223163 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.234149 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.317515 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn64z\" (UniqueName: \"kubernetes.io/projected/2aab84dd-8ed8-46fd-931a-31759485bb02-kube-api-access-vn64z\") pod \"nova-cell1-conductor-0\" (UID: \"2aab84dd-8ed8-46fd-931a-31759485bb02\") " pod="openstack/nova-cell1-conductor-0" Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.317814 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aab84dd-8ed8-46fd-931a-31759485bb02-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"2aab84dd-8ed8-46fd-931a-31759485bb02\") " pod="openstack/nova-cell1-conductor-0" Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.317838 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aab84dd-8ed8-46fd-931a-31759485bb02-config-data\") pod \"nova-cell1-conductor-0\" 
(UID: \"2aab84dd-8ed8-46fd-931a-31759485bb02\") " pod="openstack/nova-cell1-conductor-0" Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.325296 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aab84dd-8ed8-46fd-931a-31759485bb02-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"2aab84dd-8ed8-46fd-931a-31759485bb02\") " pod="openstack/nova-cell1-conductor-0" Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.327928 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aab84dd-8ed8-46fd-931a-31759485bb02-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"2aab84dd-8ed8-46fd-931a-31759485bb02\") " pod="openstack/nova-cell1-conductor-0" Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.346133 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn64z\" (UniqueName: \"kubernetes.io/projected/2aab84dd-8ed8-46fd-931a-31759485bb02-kube-api-access-vn64z\") pod \"nova-cell1-conductor-0\" (UID: \"2aab84dd-8ed8-46fd-931a-31759485bb02\") " pod="openstack/nova-cell1-conductor-0" Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.436775 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.533942 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-wpttw" Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.627040 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eea2a6c-918d-47a3-b782-9a0f035b6bdc-combined-ca-bundle\") pod \"4eea2a6c-918d-47a3-b782-9a0f035b6bdc\" (UID: \"4eea2a6c-918d-47a3-b782-9a0f035b6bdc\") " Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.627406 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eea2a6c-918d-47a3-b782-9a0f035b6bdc-config-data\") pod \"4eea2a6c-918d-47a3-b782-9a0f035b6bdc\" (UID: \"4eea2a6c-918d-47a3-b782-9a0f035b6bdc\") " Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.627467 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk9fd\" (UniqueName: \"kubernetes.io/projected/4eea2a6c-918d-47a3-b782-9a0f035b6bdc-kube-api-access-qk9fd\") pod \"4eea2a6c-918d-47a3-b782-9a0f035b6bdc\" (UID: \"4eea2a6c-918d-47a3-b782-9a0f035b6bdc\") " Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.627488 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4eea2a6c-918d-47a3-b782-9a0f035b6bdc-scripts\") pod \"4eea2a6c-918d-47a3-b782-9a0f035b6bdc\" (UID: \"4eea2a6c-918d-47a3-b782-9a0f035b6bdc\") " Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.632712 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eea2a6c-918d-47a3-b782-9a0f035b6bdc-scripts" (OuterVolumeSpecName: "scripts") pod "4eea2a6c-918d-47a3-b782-9a0f035b6bdc" (UID: "4eea2a6c-918d-47a3-b782-9a0f035b6bdc"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.633373 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eea2a6c-918d-47a3-b782-9a0f035b6bdc-kube-api-access-qk9fd" (OuterVolumeSpecName: "kube-api-access-qk9fd") pod "4eea2a6c-918d-47a3-b782-9a0f035b6bdc" (UID: "4eea2a6c-918d-47a3-b782-9a0f035b6bdc"). InnerVolumeSpecName "kube-api-access-qk9fd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.670463 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eea2a6c-918d-47a3-b782-9a0f035b6bdc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4eea2a6c-918d-47a3-b782-9a0f035b6bdc" (UID: "4eea2a6c-918d-47a3-b782-9a0f035b6bdc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.681252 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eea2a6c-918d-47a3-b782-9a0f035b6bdc-config-data" (OuterVolumeSpecName: "config-data") pod "4eea2a6c-918d-47a3-b782-9a0f035b6bdc" (UID: "4eea2a6c-918d-47a3-b782-9a0f035b6bdc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.734897 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk9fd\" (UniqueName: \"kubernetes.io/projected/4eea2a6c-918d-47a3-b782-9a0f035b6bdc-kube-api-access-qk9fd\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.734928 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4eea2a6c-918d-47a3-b782-9a0f035b6bdc-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.734937 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eea2a6c-918d-47a3-b782-9a0f035b6bdc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.734947 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eea2a6c-918d-47a3-b782-9a0f035b6bdc-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:29 crc kubenswrapper[4680]: I0217 18:05:29.970289 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.081744 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-wpttw" Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.081820 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-wpttw" event={"ID":"4eea2a6c-918d-47a3-b782-9a0f035b6bdc","Type":"ContainerDied","Data":"24b6190d4a20dd24123c5fd86ae731ee924f66ecf3f4575606e545cdfcb9fb38"} Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.082263 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24b6190d4a20dd24123c5fd86ae731ee924f66ecf3f4575606e545cdfcb9fb38" Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.090549 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"2aab84dd-8ed8-46fd-931a-31759485bb02","Type":"ContainerStarted","Data":"7c7657f384c798480630ece24cf52387e9964674eac4fc6b7108f883556e8a40"} Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.096698 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72189e3b-d3ab-48b3-a4e8-2a6109e97409","Type":"ContainerStarted","Data":"5e5d1f2959951a5dc6e4eea7b27814a5fa0d394a43aa60b119d1e7aa6f9865af"} Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.096758 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72189e3b-d3ab-48b3-a4e8-2a6109e97409","Type":"ContainerStarted","Data":"875a2ccaaabc4d457f8aafb42a8e0da7aeef3421142c926052ca334ebaea3286"} Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.097141 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.374961 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.407237 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.433108 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="307b6180-63c1-4999-bab9-d3b939f8bddb" containerName="nova-scheduler-scheduler" containerID="cri-o://5f1f3e4160a06a7c4643d3a73d4b9c244b22cd12a58c1a877a42f923e5f29db7" gracePeriod=30 Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.463680 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.729714 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.753805 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-m4hqb"] Feb 17 18:05:30 crc kubenswrapper[4680]: E0217 18:05:30.754212 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eea2a6c-918d-47a3-b782-9a0f035b6bdc" containerName="nova-manage" Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.754230 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eea2a6c-918d-47a3-b782-9a0f035b6bdc" containerName="nova-manage" Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.754452 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eea2a6c-918d-47a3-b782-9a0f035b6bdc" containerName="nova-manage" Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.755447 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-m4hqb" Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.837613 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-m4hqb"] Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.881549 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/250ac6e5-9c00-4f78-8833-9ffbd8e24549-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-m4hqb\" (UID: \"250ac6e5-9c00-4f78-8833-9ffbd8e24549\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m4hqb" Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.881665 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/250ac6e5-9c00-4f78-8833-9ffbd8e24549-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-m4hqb\" (UID: \"250ac6e5-9c00-4f78-8833-9ffbd8e24549\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m4hqb" Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.881696 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/250ac6e5-9c00-4f78-8833-9ffbd8e24549-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-m4hqb\" (UID: \"250ac6e5-9c00-4f78-8833-9ffbd8e24549\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m4hqb" Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.881721 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbqmg\" (UniqueName: \"kubernetes.io/projected/250ac6e5-9c00-4f78-8833-9ffbd8e24549-kube-api-access-kbqmg\") pod \"dnsmasq-dns-89c5cd4d5-m4hqb\" (UID: \"250ac6e5-9c00-4f78-8833-9ffbd8e24549\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m4hqb" Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.881754 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/250ac6e5-9c00-4f78-8833-9ffbd8e24549-config\") pod \"dnsmasq-dns-89c5cd4d5-m4hqb\" (UID: \"250ac6e5-9c00-4f78-8833-9ffbd8e24549\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m4hqb" Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.881835 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/250ac6e5-9c00-4f78-8833-9ffbd8e24549-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-m4hqb\" (UID: \"250ac6e5-9c00-4f78-8833-9ffbd8e24549\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m4hqb" Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.983480 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/250ac6e5-9c00-4f78-8833-9ffbd8e24549-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-m4hqb\" (UID: \"250ac6e5-9c00-4f78-8833-9ffbd8e24549\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m4hqb" Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.983590 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/250ac6e5-9c00-4f78-8833-9ffbd8e24549-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-m4hqb\" (UID: \"250ac6e5-9c00-4f78-8833-9ffbd8e24549\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m4hqb" Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.983634 4680 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/250ac6e5-9c00-4f78-8833-9ffbd8e24549-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-m4hqb\" (UID: \"250ac6e5-9c00-4f78-8833-9ffbd8e24549\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m4hqb" Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.983652 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/250ac6e5-9c00-4f78-8833-9ffbd8e24549-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-m4hqb\" (UID: \"250ac6e5-9c00-4f78-8833-9ffbd8e24549\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m4hqb" Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.983675 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbqmg\" (UniqueName: \"kubernetes.io/projected/250ac6e5-9c00-4f78-8833-9ffbd8e24549-kube-api-access-kbqmg\") pod \"dnsmasq-dns-89c5cd4d5-m4hqb\" (UID: \"250ac6e5-9c00-4f78-8833-9ffbd8e24549\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m4hqb" Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.983701 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/250ac6e5-9c00-4f78-8833-9ffbd8e24549-config\") pod \"dnsmasq-dns-89c5cd4d5-m4hqb\" (UID: \"250ac6e5-9c00-4f78-8833-9ffbd8e24549\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m4hqb" Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.984504 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/250ac6e5-9c00-4f78-8833-9ffbd8e24549-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-m4hqb\" (UID: \"250ac6e5-9c00-4f78-8833-9ffbd8e24549\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m4hqb" Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.988384 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/250ac6e5-9c00-4f78-8833-9ffbd8e24549-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-m4hqb\" (UID: \"250ac6e5-9c00-4f78-8833-9ffbd8e24549\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m4hqb" Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.988676 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/250ac6e5-9c00-4f78-8833-9ffbd8e24549-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-m4hqb\" (UID: \"250ac6e5-9c00-4f78-8833-9ffbd8e24549\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m4hqb" Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.988691 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/250ac6e5-9c00-4f78-8833-9ffbd8e24549-config\") pod \"dnsmasq-dns-89c5cd4d5-m4hqb\" (UID: \"250ac6e5-9c00-4f78-8833-9ffbd8e24549\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m4hqb" Feb 17 18:05:30 crc kubenswrapper[4680]: I0217 18:05:30.988683 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/250ac6e5-9c00-4f78-8833-9ffbd8e24549-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-m4hqb\" (UID: \"250ac6e5-9c00-4f78-8833-9ffbd8e24549\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m4hqb" Feb 17 18:05:31 crc kubenswrapper[4680]: I0217 18:05:31.010435 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbqmg\" (UniqueName: 
\"kubernetes.io/projected/250ac6e5-9c00-4f78-8833-9ffbd8e24549-kube-api-access-kbqmg\") pod \"dnsmasq-dns-89c5cd4d5-m4hqb\" (UID: \"250ac6e5-9c00-4f78-8833-9ffbd8e24549\") " pod="openstack/dnsmasq-dns-89c5cd4d5-m4hqb" Feb 17 18:05:31 crc kubenswrapper[4680]: I0217 18:05:31.111242 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-m4hqb" Feb 17 18:05:31 crc kubenswrapper[4680]: I0217 18:05:31.136354 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 17 18:05:31 crc kubenswrapper[4680]: I0217 18:05:31.136385 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"2aab84dd-8ed8-46fd-931a-31759485bb02","Type":"ContainerStarted","Data":"9999bdc9e6c7480674ef1ccfd2fae4249994c4c8195a8620e9925bf1f93b359e"} Feb 17 18:05:31 crc kubenswrapper[4680]: I0217 18:05:31.136400 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72189e3b-d3ab-48b3-a4e8-2a6109e97409","Type":"ContainerStarted","Data":"a68d47f44d90d3fbe0655f7b2b3d037fb22919baa0981f23ff4c35d870253b2a"} Feb 17 18:05:31 crc kubenswrapper[4680]: I0217 18:05:31.211986 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.211965215 podStartE2EDuration="2.211965215s" podCreationTimestamp="2026-02-17 18:05:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:05:31.194964652 +0000 UTC m=+1260.745863331" watchObservedRunningTime="2026-02-17 18:05:31.211965215 +0000 UTC m=+1260.762863894" Feb 17 18:05:31 crc kubenswrapper[4680]: I0217 18:05:31.703868 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-m4hqb"] Feb 17 18:05:32 crc kubenswrapper[4680]: I0217 18:05:32.164659 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72189e3b-d3ab-48b3-a4e8-2a6109e97409","Type":"ContainerStarted","Data":"01bd8367c2435cbbb8f3ccde95991dad77fc90f9e13c320193e946ccfcabd5a0"} Feb 17 18:05:32 crc kubenswrapper[4680]: I0217 18:05:32.173835 4680 generic.go:334] "Generic (PLEG): container finished" podID="250ac6e5-9c00-4f78-8833-9ffbd8e24549" containerID="5084eda980bdcbe5faac73050e24bb80fe4b31fdf1567b40763cc17f139d3fef" exitCode=0 Feb 17 18:05:32 crc kubenswrapper[4680]: I0217 18:05:32.175073 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-m4hqb" event={"ID":"250ac6e5-9c00-4f78-8833-9ffbd8e24549","Type":"ContainerDied","Data":"5084eda980bdcbe5faac73050e24bb80fe4b31fdf1567b40763cc17f139d3fef"} Feb 17 18:05:32 crc kubenswrapper[4680]: I0217 18:05:32.175109 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-m4hqb" event={"ID":"250ac6e5-9c00-4f78-8833-9ffbd8e24549","Type":"ContainerStarted","Data":"55f3576d9a137732f361732ca337cd23e8a77665307a110cb4a568e65b42158a"} Feb 17 18:05:32 crc kubenswrapper[4680]: I0217 18:05:32.175252 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="64de592f-9a5f-41c8-8333-66a6173d819d" containerName="nova-api-log" containerID="cri-o://870fefeb1cf762eb5d24c8843449d0487e1b58211693ceda4da9098c9d110253" gracePeriod=30 Feb 17 18:05:32 crc kubenswrapper[4680]: I0217 18:05:32.175405 4680 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/nova-api-0" podUID="64de592f-9a5f-41c8-8333-66a6173d819d" containerName="nova-api-api" containerID="cri-o://5d82e37032ab20102b1153d9e12ef037ad79a81d52f40f95000664a2dee85b5d" gracePeriod=30 Feb 17 18:05:32 crc kubenswrapper[4680]: I0217 18:05:32.799247 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 18:05:32 crc kubenswrapper[4680]: I0217 18:05:32.827883 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/307b6180-63c1-4999-bab9-d3b939f8bddb-combined-ca-bundle\") pod \"307b6180-63c1-4999-bab9-d3b939f8bddb\" (UID: \"307b6180-63c1-4999-bab9-d3b939f8bddb\") " Feb 17 18:05:32 crc kubenswrapper[4680]: I0217 18:05:32.827997 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/307b6180-63c1-4999-bab9-d3b939f8bddb-config-data\") pod \"307b6180-63c1-4999-bab9-d3b939f8bddb\" (UID: \"307b6180-63c1-4999-bab9-d3b939f8bddb\") " Feb 17 18:05:32 crc kubenswrapper[4680]: I0217 18:05:32.828050 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq67g\" (UniqueName: \"kubernetes.io/projected/307b6180-63c1-4999-bab9-d3b939f8bddb-kube-api-access-pq67g\") pod \"307b6180-63c1-4999-bab9-d3b939f8bddb\" (UID: \"307b6180-63c1-4999-bab9-d3b939f8bddb\") " Feb 17 18:05:32 crc kubenswrapper[4680]: I0217 18:05:32.841599 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/307b6180-63c1-4999-bab9-d3b939f8bddb-kube-api-access-pq67g" (OuterVolumeSpecName: "kube-api-access-pq67g") pod "307b6180-63c1-4999-bab9-d3b939f8bddb" (UID: "307b6180-63c1-4999-bab9-d3b939f8bddb"). InnerVolumeSpecName "kube-api-access-pq67g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:05:32 crc kubenswrapper[4680]: I0217 18:05:32.870701 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/307b6180-63c1-4999-bab9-d3b939f8bddb-config-data" (OuterVolumeSpecName: "config-data") pod "307b6180-63c1-4999-bab9-d3b939f8bddb" (UID: "307b6180-63c1-4999-bab9-d3b939f8bddb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:05:32 crc kubenswrapper[4680]: I0217 18:05:32.896684 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/307b6180-63c1-4999-bab9-d3b939f8bddb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "307b6180-63c1-4999-bab9-d3b939f8bddb" (UID: "307b6180-63c1-4999-bab9-d3b939f8bddb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:05:32 crc kubenswrapper[4680]: I0217 18:05:32.930747 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/307b6180-63c1-4999-bab9-d3b939f8bddb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:32 crc kubenswrapper[4680]: I0217 18:05:32.930787 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/307b6180-63c1-4999-bab9-d3b939f8bddb-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:32 crc kubenswrapper[4680]: I0217 18:05:32.930798 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pq67g\" (UniqueName: \"kubernetes.io/projected/307b6180-63c1-4999-bab9-d3b939f8bddb-kube-api-access-pq67g\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:33 crc kubenswrapper[4680]: I0217 18:05:33.202827 4680 generic.go:334] "Generic (PLEG): container finished" podID="307b6180-63c1-4999-bab9-d3b939f8bddb" containerID="5f1f3e4160a06a7c4643d3a73d4b9c244b22cd12a58c1a877a42f923e5f29db7" exitCode=0 Feb 17 18:05:33 crc kubenswrapper[4680]: I0217 18:05:33.203295 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"307b6180-63c1-4999-bab9-d3b939f8bddb","Type":"ContainerDied","Data":"5f1f3e4160a06a7c4643d3a73d4b9c244b22cd12a58c1a877a42f923e5f29db7"} Feb 17 18:05:33 crc kubenswrapper[4680]: I0217 18:05:33.203350 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"307b6180-63c1-4999-bab9-d3b939f8bddb","Type":"ContainerDied","Data":"6f46f53c162cf2f42814330e194969edebebcb757c9ef8f66f29d6e9a550c2f8"} Feb 17 18:05:33 crc kubenswrapper[4680]: I0217 18:05:33.203372 4680 scope.go:117] "RemoveContainer" containerID="5f1f3e4160a06a7c4643d3a73d4b9c244b22cd12a58c1a877a42f923e5f29db7" Feb 17 18:05:33 crc kubenswrapper[4680]: I0217 18:05:33.203598 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 18:05:33 crc kubenswrapper[4680]: I0217 18:05:33.224306 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-m4hqb" event={"ID":"250ac6e5-9c00-4f78-8833-9ffbd8e24549","Type":"ContainerStarted","Data":"92ee09ea902edffb5451e9f5f4819c500d3a8c6500b456ab1fbfa7f7c3b0ac95"} Feb 17 18:05:33 crc kubenswrapper[4680]: I0217 18:05:33.224377 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-m4hqb" Feb 17 18:05:33 crc kubenswrapper[4680]: I0217 18:05:33.253707 4680 generic.go:334] "Generic (PLEG): container finished" podID="64de592f-9a5f-41c8-8333-66a6173d819d" containerID="870fefeb1cf762eb5d24c8843449d0487e1b58211693ceda4da9098c9d110253" exitCode=143 Feb 17 18:05:33 crc kubenswrapper[4680]: I0217 18:05:33.253751 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"64de592f-9a5f-41c8-8333-66a6173d819d","Type":"ContainerDied","Data":"870fefeb1cf762eb5d24c8843449d0487e1b58211693ceda4da9098c9d110253"} Feb 17 18:05:33 crc kubenswrapper[4680]: I0217 18:05:33.275693 4680 scope.go:117] "RemoveContainer" containerID="5f1f3e4160a06a7c4643d3a73d4b9c244b22cd12a58c1a877a42f923e5f29db7" Feb 17 18:05:33 crc kubenswrapper[4680]: I0217 18:05:33.283167 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-m4hqb" podStartSLOduration=3.283140681 podStartE2EDuration="3.283140681s" podCreationTimestamp="2026-02-17 18:05:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:05:33.262737972 +0000 UTC m=+1262.813636651" watchObservedRunningTime="2026-02-17 18:05:33.283140681 +0000 UTC m=+1262.834039360" Feb 17 18:05:33 crc kubenswrapper[4680]: E0217 18:05:33.283553 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f1f3e4160a06a7c4643d3a73d4b9c244b22cd12a58c1a877a42f923e5f29db7\": container with ID starting with 5f1f3e4160a06a7c4643d3a73d4b9c244b22cd12a58c1a877a42f923e5f29db7 not found: ID does not exist" containerID="5f1f3e4160a06a7c4643d3a73d4b9c244b22cd12a58c1a877a42f923e5f29db7" Feb 17 18:05:33 crc kubenswrapper[4680]: I0217 18:05:33.283591 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f1f3e4160a06a7c4643d3a73d4b9c244b22cd12a58c1a877a42f923e5f29db7"} err="failed to get container status \"5f1f3e4160a06a7c4643d3a73d4b9c244b22cd12a58c1a877a42f923e5f29db7\": rpc error: code = NotFound desc = could not find container \"5f1f3e4160a06a7c4643d3a73d4b9c244b22cd12a58c1a877a42f923e5f29db7\": container with ID starting with 5f1f3e4160a06a7c4643d3a73d4b9c244b22cd12a58c1a877a42f923e5f29db7 not found: ID does not exist" Feb 17 18:05:33 crc kubenswrapper[4680]: I0217 18:05:33.318386 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 18:05:33 crc kubenswrapper[4680]: I0217 18:05:33.347375 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 18:05:33 crc kubenswrapper[4680]: I0217 18:05:33.410413 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 18:05:33 crc kubenswrapper[4680]: E0217 18:05:33.411360 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="307b6180-63c1-4999-bab9-d3b939f8bddb" 
containerName="nova-scheduler-scheduler" Feb 17 18:05:33 crc kubenswrapper[4680]: I0217 18:05:33.411377 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="307b6180-63c1-4999-bab9-d3b939f8bddb" containerName="nova-scheduler-scheduler" Feb 17 18:05:33 crc kubenswrapper[4680]: I0217 18:05:33.411726 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="307b6180-63c1-4999-bab9-d3b939f8bddb" containerName="nova-scheduler-scheduler" Feb 17 18:05:33 crc kubenswrapper[4680]: I0217 18:05:33.416884 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 18:05:33 crc kubenswrapper[4680]: I0217 18:05:33.423841 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 17 18:05:33 crc kubenswrapper[4680]: I0217 18:05:33.454664 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 18:05:33 crc kubenswrapper[4680]: I0217 18:05:33.472235 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/725cb326-b867-449a-baca-95ca7093c415-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"725cb326-b867-449a-baca-95ca7093c415\") " pod="openstack/nova-scheduler-0" Feb 17 18:05:33 crc kubenswrapper[4680]: I0217 18:05:33.472379 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/725cb326-b867-449a-baca-95ca7093c415-config-data\") pod \"nova-scheduler-0\" (UID: \"725cb326-b867-449a-baca-95ca7093c415\") " pod="openstack/nova-scheduler-0" Feb 17 18:05:33 crc kubenswrapper[4680]: I0217 18:05:33.472661 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm685\" (UniqueName: \"kubernetes.io/projected/725cb326-b867-449a-baca-95ca7093c415-kube-api-access-fm685\") pod \"nova-scheduler-0\" (UID: \"725cb326-b867-449a-baca-95ca7093c415\") " pod="openstack/nova-scheduler-0" Feb 17 18:05:33 crc kubenswrapper[4680]: I0217 18:05:33.575627 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/725cb326-b867-449a-baca-95ca7093c415-config-data\") pod \"nova-scheduler-0\" (UID: \"725cb326-b867-449a-baca-95ca7093c415\") " pod="openstack/nova-scheduler-0" Feb 17 18:05:33 crc kubenswrapper[4680]: I0217 18:05:33.575799 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm685\" (UniqueName: \"kubernetes.io/projected/725cb326-b867-449a-baca-95ca7093c415-kube-api-access-fm685\") pod \"nova-scheduler-0\" (UID: \"725cb326-b867-449a-baca-95ca7093c415\") " pod="openstack/nova-scheduler-0" Feb 17 18:05:33 crc kubenswrapper[4680]: I0217 18:05:33.575924 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/725cb326-b867-449a-baca-95ca7093c415-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"725cb326-b867-449a-baca-95ca7093c415\") " pod="openstack/nova-scheduler-0" Feb 17 18:05:33 crc kubenswrapper[4680]: I0217 18:05:33.585039 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/725cb326-b867-449a-baca-95ca7093c415-config-data\") pod \"nova-scheduler-0\" (UID: \"725cb326-b867-449a-baca-95ca7093c415\") " 
pod="openstack/nova-scheduler-0" Feb 17 18:05:33 crc kubenswrapper[4680]: I0217 18:05:33.603241 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/725cb326-b867-449a-baca-95ca7093c415-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"725cb326-b867-449a-baca-95ca7093c415\") " pod="openstack/nova-scheduler-0" Feb 17 18:05:33 crc kubenswrapper[4680]: I0217 18:05:33.610031 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm685\" (UniqueName: \"kubernetes.io/projected/725cb326-b867-449a-baca-95ca7093c415-kube-api-access-fm685\") pod \"nova-scheduler-0\" (UID: \"725cb326-b867-449a-baca-95ca7093c415\") " pod="openstack/nova-scheduler-0" Feb 17 18:05:33 crc kubenswrapper[4680]: I0217 18:05:33.762636 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 18:05:34 crc kubenswrapper[4680]: I0217 18:05:34.265040 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72189e3b-d3ab-48b3-a4e8-2a6109e97409","Type":"ContainerStarted","Data":"bc122f830a3abebedd821d5694834ff0a0277d01647e1697e54cc8e55d7252ea"} Feb 17 18:05:34 crc kubenswrapper[4680]: I0217 18:05:34.296942 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.397134642 podStartE2EDuration="6.29692396s" podCreationTimestamp="2026-02-17 18:05:28 +0000 UTC" firstStartedPulling="2026-02-17 18:05:29.113409091 +0000 UTC m=+1258.664307770" lastFinishedPulling="2026-02-17 18:05:33.013198409 +0000 UTC m=+1262.564097088" observedRunningTime="2026-02-17 18:05:34.289159727 +0000 UTC m=+1263.840058426" watchObservedRunningTime="2026-02-17 18:05:34.29692396 +0000 UTC m=+1263.847822639" Feb 17 18:05:34 crc kubenswrapper[4680]: I0217 18:05:34.398343 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 18:05:35 crc kubenswrapper[4680]: I0217 18:05:35.115230 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="307b6180-63c1-4999-bab9-d3b939f8bddb" path="/var/lib/kubelet/pods/307b6180-63c1-4999-bab9-d3b939f8bddb/volumes" Feb 17 18:05:35 crc kubenswrapper[4680]: I0217 18:05:35.285418 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"725cb326-b867-449a-baca-95ca7093c415","Type":"ContainerStarted","Data":"a21761f29ca820d913bfa7243f7ac5c6e7117b43a163254cfec6d96484061bd1"} Feb 17 18:05:35 crc kubenswrapper[4680]: I0217 18:05:35.285485 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 18:05:35 crc kubenswrapper[4680]: I0217 18:05:35.285503 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"725cb326-b867-449a-baca-95ca7093c415","Type":"ContainerStarted","Data":"8a8addec9ffeba819d71ca9b932bfaf41384b7c04aa1bde13cce4ec3b8f73acb"} Feb 17 18:05:35 crc kubenswrapper[4680]: I0217 18:05:35.305040 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.305021861 podStartE2EDuration="2.305021861s" podCreationTimestamp="2026-02-17 18:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:05:35.302779411 +0000 UTC m=+1264.853678090" watchObservedRunningTime="2026-02-17 18:05:35.305021861 +0000 UTC 
m=+1264.855920540" Feb 17 18:05:35 crc kubenswrapper[4680]: I0217 18:05:35.588437 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:05:35 crc kubenswrapper[4680]: I0217 18:05:35.588493 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:05:36 crc kubenswrapper[4680]: I0217 18:05:36.295610 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 18:05:36 crc kubenswrapper[4680]: I0217 18:05:36.295638 4680 generic.go:334] "Generic (PLEG): container finished" podID="64de592f-9a5f-41c8-8333-66a6173d819d" containerID="5d82e37032ab20102b1153d9e12ef037ad79a81d52f40f95000664a2dee85b5d" exitCode=0 Feb 17 18:05:36 crc kubenswrapper[4680]: I0217 18:05:36.295657 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"64de592f-9a5f-41c8-8333-66a6173d819d","Type":"ContainerDied","Data":"5d82e37032ab20102b1153d9e12ef037ad79a81d52f40f95000664a2dee85b5d"} Feb 17 18:05:36 crc kubenswrapper[4680]: I0217 18:05:36.297373 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"64de592f-9a5f-41c8-8333-66a6173d819d","Type":"ContainerDied","Data":"0a11b94b5fe5a4a94a9ff2e29cd434a86a212d610ec284aeb08adf04e5ff2e60"} Feb 17 18:05:36 crc kubenswrapper[4680]: I0217 18:05:36.297391 4680 scope.go:117] "RemoveContainer" containerID="5d82e37032ab20102b1153d9e12ef037ad79a81d52f40f95000664a2dee85b5d" Feb 17 18:05:36 crc kubenswrapper[4680]: I0217 18:05:36.333693 4680 scope.go:117] "RemoveContainer" containerID="870fefeb1cf762eb5d24c8843449d0487e1b58211693ceda4da9098c9d110253" Feb 17 18:05:36 crc kubenswrapper[4680]: I0217 18:05:36.349568 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64de592f-9a5f-41c8-8333-66a6173d819d-combined-ca-bundle\") pod \"64de592f-9a5f-41c8-8333-66a6173d819d\" (UID: \"64de592f-9a5f-41c8-8333-66a6173d819d\") " Feb 17 18:05:36 crc kubenswrapper[4680]: I0217 18:05:36.349623 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6bbp\" (UniqueName: \"kubernetes.io/projected/64de592f-9a5f-41c8-8333-66a6173d819d-kube-api-access-r6bbp\") pod \"64de592f-9a5f-41c8-8333-66a6173d819d\" (UID: \"64de592f-9a5f-41c8-8333-66a6173d819d\") " Feb 17 18:05:36 crc kubenswrapper[4680]: I0217 18:05:36.349717 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64de592f-9a5f-41c8-8333-66a6173d819d-logs\") pod \"64de592f-9a5f-41c8-8333-66a6173d819d\" (UID: \"64de592f-9a5f-41c8-8333-66a6173d819d\") " Feb 17 18:05:36 crc kubenswrapper[4680]: I0217 18:05:36.349744 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64de592f-9a5f-41c8-8333-66a6173d819d-config-data\") pod \"64de592f-9a5f-41c8-8333-66a6173d819d\" (UID: \"64de592f-9a5f-41c8-8333-66a6173d819d\") " Feb 17 
18:05:36 crc kubenswrapper[4680]: I0217 18:05:36.351022 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64de592f-9a5f-41c8-8333-66a6173d819d-logs" (OuterVolumeSpecName: "logs") pod "64de592f-9a5f-41c8-8333-66a6173d819d" (UID: "64de592f-9a5f-41c8-8333-66a6173d819d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:05:36 crc kubenswrapper[4680]: I0217 18:05:36.374450 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64de592f-9a5f-41c8-8333-66a6173d819d-kube-api-access-r6bbp" (OuterVolumeSpecName: "kube-api-access-r6bbp") pod "64de592f-9a5f-41c8-8333-66a6173d819d" (UID: "64de592f-9a5f-41c8-8333-66a6173d819d"). InnerVolumeSpecName "kube-api-access-r6bbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:05:36 crc kubenswrapper[4680]: I0217 18:05:36.384600 4680 scope.go:117] "RemoveContainer" containerID="5d82e37032ab20102b1153d9e12ef037ad79a81d52f40f95000664a2dee85b5d" Feb 17 18:05:36 crc kubenswrapper[4680]: E0217 18:05:36.388485 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d82e37032ab20102b1153d9e12ef037ad79a81d52f40f95000664a2dee85b5d\": container with ID starting with 5d82e37032ab20102b1153d9e12ef037ad79a81d52f40f95000664a2dee85b5d not found: ID does not exist" containerID="5d82e37032ab20102b1153d9e12ef037ad79a81d52f40f95000664a2dee85b5d" Feb 17 18:05:36 crc kubenswrapper[4680]: I0217 18:05:36.388525 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d82e37032ab20102b1153d9e12ef037ad79a81d52f40f95000664a2dee85b5d"} err="failed to get container status \"5d82e37032ab20102b1153d9e12ef037ad79a81d52f40f95000664a2dee85b5d\": rpc error: code = NotFound desc = could not find container \"5d82e37032ab20102b1153d9e12ef037ad79a81d52f40f95000664a2dee85b5d\": container with ID starting with 5d82e37032ab20102b1153d9e12ef037ad79a81d52f40f95000664a2dee85b5d not found: ID does not exist" Feb 17 18:05:36 crc kubenswrapper[4680]: I0217 18:05:36.388547 4680 scope.go:117] "RemoveContainer" containerID="870fefeb1cf762eb5d24c8843449d0487e1b58211693ceda4da9098c9d110253" Feb 17 18:05:36 crc kubenswrapper[4680]: E0217 18:05:36.402567 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"870fefeb1cf762eb5d24c8843449d0487e1b58211693ceda4da9098c9d110253\": container with ID starting with 870fefeb1cf762eb5d24c8843449d0487e1b58211693ceda4da9098c9d110253 not found: ID does not exist" containerID="870fefeb1cf762eb5d24c8843449d0487e1b58211693ceda4da9098c9d110253" Feb 17 18:05:36 crc kubenswrapper[4680]: I0217 18:05:36.402613 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"870fefeb1cf762eb5d24c8843449d0487e1b58211693ceda4da9098c9d110253"} err="failed to get container status \"870fefeb1cf762eb5d24c8843449d0487e1b58211693ceda4da9098c9d110253\": rpc error: code = NotFound desc = could not find container \"870fefeb1cf762eb5d24c8843449d0487e1b58211693ceda4da9098c9d110253\": container with ID starting with 870fefeb1cf762eb5d24c8843449d0487e1b58211693ceda4da9098c9d110253 not found: ID does not exist" Feb 17 18:05:36 crc kubenswrapper[4680]: I0217 18:05:36.423852 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/64de592f-9a5f-41c8-8333-66a6173d819d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64de592f-9a5f-41c8-8333-66a6173d819d" (UID: "64de592f-9a5f-41c8-8333-66a6173d819d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:05:36 crc kubenswrapper[4680]: I0217 18:05:36.451669 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64de592f-9a5f-41c8-8333-66a6173d819d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:36 crc kubenswrapper[4680]: I0217 18:05:36.451700 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6bbp\" (UniqueName: \"kubernetes.io/projected/64de592f-9a5f-41c8-8333-66a6173d819d-kube-api-access-r6bbp\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:36 crc kubenswrapper[4680]: I0217 18:05:36.451711 4680 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64de592f-9a5f-41c8-8333-66a6173d819d-logs\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:36 crc kubenswrapper[4680]: I0217 18:05:36.455555 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64de592f-9a5f-41c8-8333-66a6173d819d-config-data" (OuterVolumeSpecName: "config-data") pod "64de592f-9a5f-41c8-8333-66a6173d819d" (UID: "64de592f-9a5f-41c8-8333-66a6173d819d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:05:36 crc kubenswrapper[4680]: I0217 18:05:36.556536 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64de592f-9a5f-41c8-8333-66a6173d819d-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:37 crc kubenswrapper[4680]: I0217 18:05:37.321381 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 18:05:37 crc kubenswrapper[4680]: I0217 18:05:37.389392 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 18:05:37 crc kubenswrapper[4680]: I0217 18:05:37.415418 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 17 18:05:37 crc kubenswrapper[4680]: I0217 18:05:37.429238 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 18:05:37 crc kubenswrapper[4680]: E0217 18:05:37.429797 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64de592f-9a5f-41c8-8333-66a6173d819d" containerName="nova-api-api" Feb 17 18:05:37 crc kubenswrapper[4680]: I0217 18:05:37.429816 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="64de592f-9a5f-41c8-8333-66a6173d819d" containerName="nova-api-api" Feb 17 18:05:37 crc kubenswrapper[4680]: E0217 18:05:37.429853 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64de592f-9a5f-41c8-8333-66a6173d819d" containerName="nova-api-log" Feb 17 18:05:37 crc kubenswrapper[4680]: I0217 18:05:37.429860 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="64de592f-9a5f-41c8-8333-66a6173d819d" containerName="nova-api-log" Feb 17 18:05:37 crc kubenswrapper[4680]: I0217 18:05:37.430105 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="64de592f-9a5f-41c8-8333-66a6173d819d" containerName="nova-api-log" Feb 17 18:05:37 crc kubenswrapper[4680]: I0217 18:05:37.430126 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="64de592f-9a5f-41c8-8333-66a6173d819d" containerName="nova-api-api" Feb 17 18:05:37 crc kubenswrapper[4680]: I0217 18:05:37.431397 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 18:05:37 crc kubenswrapper[4680]: I0217 18:05:37.434194 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 18:05:37 crc kubenswrapper[4680]: I0217 18:05:37.434482 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 17 18:05:37 crc kubenswrapper[4680]: I0217 18:05:37.438020 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 17 18:05:37 crc kubenswrapper[4680]: I0217 18:05:37.444681 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 18:05:37 crc kubenswrapper[4680]: I0217 18:05:37.480488 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb18be12-e29f-410a-9ada-860058113aa3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"eb18be12-e29f-410a-9ada-860058113aa3\") " pod="openstack/nova-api-0" Feb 17 18:05:37 crc kubenswrapper[4680]: I0217 18:05:37.480576 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb18be12-e29f-410a-9ada-860058113aa3-logs\") pod \"nova-api-0\" (UID: \"eb18be12-e29f-410a-9ada-860058113aa3\") " pod="openstack/nova-api-0" Feb 17 18:05:37 crc kubenswrapper[4680]: I0217 18:05:37.480605 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb18be12-e29f-410a-9ada-860058113aa3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eb18be12-e29f-410a-9ada-860058113aa3\") " pod="openstack/nova-api-0" Feb 17 18:05:37 crc kubenswrapper[4680]: I0217 18:05:37.480623 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4bsk\" (UniqueName: \"kubernetes.io/projected/eb18be12-e29f-410a-9ada-860058113aa3-kube-api-access-s4bsk\") pod \"nova-api-0\" (UID: \"eb18be12-e29f-410a-9ada-860058113aa3\") " pod="openstack/nova-api-0" Feb 17 18:05:37 crc kubenswrapper[4680]: I0217 18:05:37.480642 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb18be12-e29f-410a-9ada-860058113aa3-public-tls-certs\") pod \"nova-api-0\" (UID: \"eb18be12-e29f-410a-9ada-860058113aa3\") " pod="openstack/nova-api-0" Feb 17 18:05:37 crc kubenswrapper[4680]: I0217 18:05:37.480672 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb18be12-e29f-410a-9ada-860058113aa3-config-data\") pod \"nova-api-0\" (UID: \"eb18be12-e29f-410a-9ada-860058113aa3\") " pod="openstack/nova-api-0" Feb 17 18:05:37 crc kubenswrapper[4680]: I0217 18:05:37.582395 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb18be12-e29f-410a-9ada-860058113aa3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"eb18be12-e29f-410a-9ada-860058113aa3\") " pod="openstack/nova-api-0" Feb 17 18:05:37 crc kubenswrapper[4680]: I0217 18:05:37.582478 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb18be12-e29f-410a-9ada-860058113aa3-logs\") pod \"nova-api-0\" (UID: 
\"eb18be12-e29f-410a-9ada-860058113aa3\") " pod="openstack/nova-api-0" Feb 17 18:05:37 crc kubenswrapper[4680]: I0217 18:05:37.582510 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb18be12-e29f-410a-9ada-860058113aa3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eb18be12-e29f-410a-9ada-860058113aa3\") " pod="openstack/nova-api-0" Feb 17 18:05:37 crc kubenswrapper[4680]: I0217 18:05:37.582530 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4bsk\" (UniqueName: \"kubernetes.io/projected/eb18be12-e29f-410a-9ada-860058113aa3-kube-api-access-s4bsk\") pod \"nova-api-0\" (UID: \"eb18be12-e29f-410a-9ada-860058113aa3\") " pod="openstack/nova-api-0" Feb 17 18:05:37 crc kubenswrapper[4680]: I0217 18:05:37.582548 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb18be12-e29f-410a-9ada-860058113aa3-public-tls-certs\") pod \"nova-api-0\" (UID: \"eb18be12-e29f-410a-9ada-860058113aa3\") " pod="openstack/nova-api-0" Feb 17 18:05:37 crc kubenswrapper[4680]: I0217 18:05:37.582586 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb18be12-e29f-410a-9ada-860058113aa3-config-data\") pod \"nova-api-0\" (UID: \"eb18be12-e29f-410a-9ada-860058113aa3\") " pod="openstack/nova-api-0" Feb 17 18:05:37 crc kubenswrapper[4680]: I0217 18:05:37.583752 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb18be12-e29f-410a-9ada-860058113aa3-logs\") pod \"nova-api-0\" (UID: \"eb18be12-e29f-410a-9ada-860058113aa3\") " pod="openstack/nova-api-0" Feb 17 18:05:37 crc kubenswrapper[4680]: I0217 18:05:37.587204 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb18be12-e29f-410a-9ada-860058113aa3-config-data\") pod \"nova-api-0\" (UID: \"eb18be12-e29f-410a-9ada-860058113aa3\") " pod="openstack/nova-api-0" Feb 17 18:05:37 crc kubenswrapper[4680]: I0217 18:05:37.588422 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb18be12-e29f-410a-9ada-860058113aa3-public-tls-certs\") pod \"nova-api-0\" (UID: \"eb18be12-e29f-410a-9ada-860058113aa3\") " pod="openstack/nova-api-0" Feb 17 18:05:37 crc kubenswrapper[4680]: I0217 18:05:37.589156 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb18be12-e29f-410a-9ada-860058113aa3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"eb18be12-e29f-410a-9ada-860058113aa3\") " pod="openstack/nova-api-0" Feb 17 18:05:37 crc kubenswrapper[4680]: I0217 18:05:37.590037 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb18be12-e29f-410a-9ada-860058113aa3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eb18be12-e29f-410a-9ada-860058113aa3\") " pod="openstack/nova-api-0" Feb 17 18:05:37 crc kubenswrapper[4680]: I0217 18:05:37.606531 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4bsk\" (UniqueName: \"kubernetes.io/projected/eb18be12-e29f-410a-9ada-860058113aa3-kube-api-access-s4bsk\") pod \"nova-api-0\" (UID: \"eb18be12-e29f-410a-9ada-860058113aa3\") " 
pod="openstack/nova-api-0" Feb 17 18:05:37 crc kubenswrapper[4680]: I0217 18:05:37.759199 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 18:05:38 crc kubenswrapper[4680]: I0217 18:05:38.315796 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 18:05:38 crc kubenswrapper[4680]: I0217 18:05:38.762682 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 17 18:05:39 crc kubenswrapper[4680]: I0217 18:05:39.118076 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64de592f-9a5f-41c8-8333-66a6173d819d" path="/var/lib/kubelet/pods/64de592f-9a5f-41c8-8333-66a6173d819d/volumes" Feb 17 18:05:39 crc kubenswrapper[4680]: I0217 18:05:39.245081 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 18:05:39 crc kubenswrapper[4680]: I0217 18:05:39.245436 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="72189e3b-d3ab-48b3-a4e8-2a6109e97409" containerName="ceilometer-central-agent" containerID="cri-o://5e5d1f2959951a5dc6e4eea7b27814a5fa0d394a43aa60b119d1e7aa6f9865af" gracePeriod=30 Feb 17 18:05:39 crc kubenswrapper[4680]: I0217 18:05:39.245566 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="72189e3b-d3ab-48b3-a4e8-2a6109e97409" containerName="ceilometer-notification-agent" containerID="cri-o://a68d47f44d90d3fbe0655f7b2b3d037fb22919baa0981f23ff4c35d870253b2a" gracePeriod=30 Feb 17 18:05:39 crc kubenswrapper[4680]: I0217 18:05:39.245560 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="72189e3b-d3ab-48b3-a4e8-2a6109e97409" containerName="sg-core" containerID="cri-o://01bd8367c2435cbbb8f3ccde95991dad77fc90f9e13c320193e946ccfcabd5a0" gracePeriod=30 Feb 17 18:05:39 crc kubenswrapper[4680]: I0217 18:05:39.245586 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="72189e3b-d3ab-48b3-a4e8-2a6109e97409" containerName="proxy-httpd" containerID="cri-o://bc122f830a3abebedd821d5694834ff0a0277d01647e1697e54cc8e55d7252ea" gracePeriod=30 Feb 17 18:05:39 crc kubenswrapper[4680]: I0217 18:05:39.344781 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb18be12-e29f-410a-9ada-860058113aa3","Type":"ContainerStarted","Data":"bbdbf93767e11f09f62270c799255d7bed2c2e01a523af36b8098fb007f80b93"} Feb 17 18:05:39 crc kubenswrapper[4680]: I0217 18:05:39.344830 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb18be12-e29f-410a-9ada-860058113aa3","Type":"ContainerStarted","Data":"abae1c75d44842abbefd85ec7fe45257bcddc24a76e73e65bd47c51fc0458442"} Feb 17 18:05:39 crc kubenswrapper[4680]: I0217 18:05:39.344847 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb18be12-e29f-410a-9ada-860058113aa3","Type":"ContainerStarted","Data":"2b74df591a7b26aeaac45f9a58e7310a9d8b5f04a754009d3d7ca7e412415331"} Feb 17 18:05:39 crc kubenswrapper[4680]: I0217 18:05:39.385823 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.385805412 podStartE2EDuration="2.385805412s" podCreationTimestamp="2026-02-17 18:05:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:05:39.378107369 +0000 UTC m=+1268.929006048" watchObservedRunningTime="2026-02-17 18:05:39.385805412 +0000 UTC m=+1268.936704091" Feb 17 18:05:39 crc kubenswrapper[4680]: I0217 18:05:39.473712 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 17 18:05:40 crc kubenswrapper[4680]: I0217 18:05:40.383740 4680 generic.go:334] "Generic (PLEG): container finished" podID="72189e3b-d3ab-48b3-a4e8-2a6109e97409" containerID="bc122f830a3abebedd821d5694834ff0a0277d01647e1697e54cc8e55d7252ea" exitCode=0 Feb 17 18:05:40 crc kubenswrapper[4680]: I0217 18:05:40.383994 4680 generic.go:334] "Generic (PLEG): container finished" podID="72189e3b-d3ab-48b3-a4e8-2a6109e97409" containerID="01bd8367c2435cbbb8f3ccde95991dad77fc90f9e13c320193e946ccfcabd5a0" exitCode=2 Feb 17 18:05:40 crc kubenswrapper[4680]: I0217 18:05:40.384008 4680 generic.go:334] "Generic (PLEG): container finished" podID="72189e3b-d3ab-48b3-a4e8-2a6109e97409" containerID="a68d47f44d90d3fbe0655f7b2b3d037fb22919baa0981f23ff4c35d870253b2a" exitCode=0 Feb 17 18:05:40 crc kubenswrapper[4680]: I0217 18:05:40.383946 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72189e3b-d3ab-48b3-a4e8-2a6109e97409","Type":"ContainerDied","Data":"bc122f830a3abebedd821d5694834ff0a0277d01647e1697e54cc8e55d7252ea"} Feb 17 18:05:40 crc kubenswrapper[4680]: I0217 18:05:40.384106 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72189e3b-d3ab-48b3-a4e8-2a6109e97409","Type":"ContainerDied","Data":"01bd8367c2435cbbb8f3ccde95991dad77fc90f9e13c320193e946ccfcabd5a0"} Feb 17 18:05:40 crc kubenswrapper[4680]: I0217 18:05:40.384123 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72189e3b-d3ab-48b3-a4e8-2a6109e97409","Type":"ContainerDied","Data":"a68d47f44d90d3fbe0655f7b2b3d037fb22919baa0981f23ff4c35d870253b2a"} Feb 17 18:05:41 crc kubenswrapper[4680]: I0217 18:05:41.123266 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-m4hqb" Feb 17 18:05:41 crc kubenswrapper[4680]: I0217 18:05:41.189610 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-kgh75"] Feb 17 18:05:41 crc kubenswrapper[4680]: I0217 18:05:41.190068 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-kgh75" podUID="434d2204-587d-4dfe-b473-32490d402e2b" containerName="dnsmasq-dns" containerID="cri-o://48dedf604a5f7d3ea1176543be3ba7bdbc70e4af0ae2b34751c8680b36eb8295" gracePeriod=10 Feb 17 18:05:41 crc kubenswrapper[4680]: I0217 18:05:41.395871 4680 generic.go:334] "Generic (PLEG): container finished" podID="434d2204-587d-4dfe-b473-32490d402e2b" containerID="48dedf604a5f7d3ea1176543be3ba7bdbc70e4af0ae2b34751c8680b36eb8295" exitCode=0 Feb 17 18:05:41 crc kubenswrapper[4680]: I0217 18:05:41.396192 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-kgh75" event={"ID":"434d2204-587d-4dfe-b473-32490d402e2b","Type":"ContainerDied","Data":"48dedf604a5f7d3ea1176543be3ba7bdbc70e4af0ae2b34751c8680b36eb8295"} Feb 17 18:05:41 crc kubenswrapper[4680]: I0217 18:05:41.803506 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-kgh75" Feb 17 18:05:41 crc kubenswrapper[4680]: I0217 18:05:41.880831 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ld4p\" (UniqueName: \"kubernetes.io/projected/434d2204-587d-4dfe-b473-32490d402e2b-kube-api-access-7ld4p\") pod \"434d2204-587d-4dfe-b473-32490d402e2b\" (UID: \"434d2204-587d-4dfe-b473-32490d402e2b\") " Feb 17 18:05:41 crc kubenswrapper[4680]: I0217 18:05:41.881211 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/434d2204-587d-4dfe-b473-32490d402e2b-config\") pod \"434d2204-587d-4dfe-b473-32490d402e2b\" (UID: \"434d2204-587d-4dfe-b473-32490d402e2b\") " Feb 17 18:05:41 crc kubenswrapper[4680]: I0217 18:05:41.881271 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/434d2204-587d-4dfe-b473-32490d402e2b-ovsdbserver-sb\") pod \"434d2204-587d-4dfe-b473-32490d402e2b\" (UID: \"434d2204-587d-4dfe-b473-32490d402e2b\") " Feb 17 18:05:41 crc kubenswrapper[4680]: I0217 18:05:41.881355 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/434d2204-587d-4dfe-b473-32490d402e2b-dns-swift-storage-0\") pod \"434d2204-587d-4dfe-b473-32490d402e2b\" (UID: \"434d2204-587d-4dfe-b473-32490d402e2b\") " Feb 17 18:05:41 crc kubenswrapper[4680]: I0217 18:05:41.881384 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/434d2204-587d-4dfe-b473-32490d402e2b-ovsdbserver-nb\") pod \"434d2204-587d-4dfe-b473-32490d402e2b\" (UID: \"434d2204-587d-4dfe-b473-32490d402e2b\") " Feb 17 18:05:41 crc kubenswrapper[4680]: I0217 18:05:41.881430 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/434d2204-587d-4dfe-b473-32490d402e2b-dns-svc\") pod \"434d2204-587d-4dfe-b473-32490d402e2b\" (UID: \"434d2204-587d-4dfe-b473-32490d402e2b\") " Feb 17 18:05:41 crc kubenswrapper[4680]: I0217 18:05:41.924553 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/434d2204-587d-4dfe-b473-32490d402e2b-kube-api-access-7ld4p" (OuterVolumeSpecName: "kube-api-access-7ld4p") pod "434d2204-587d-4dfe-b473-32490d402e2b" (UID: "434d2204-587d-4dfe-b473-32490d402e2b"). InnerVolumeSpecName "kube-api-access-7ld4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:05:41 crc kubenswrapper[4680]: I0217 18:05:41.962902 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/434d2204-587d-4dfe-b473-32490d402e2b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "434d2204-587d-4dfe-b473-32490d402e2b" (UID: "434d2204-587d-4dfe-b473-32490d402e2b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:05:41 crc kubenswrapper[4680]: I0217 18:05:41.977182 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/434d2204-587d-4dfe-b473-32490d402e2b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "434d2204-587d-4dfe-b473-32490d402e2b" (UID: "434d2204-587d-4dfe-b473-32490d402e2b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:05:41 crc kubenswrapper[4680]: I0217 18:05:41.986480 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ld4p\" (UniqueName: \"kubernetes.io/projected/434d2204-587d-4dfe-b473-32490d402e2b-kube-api-access-7ld4p\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:41 crc kubenswrapper[4680]: I0217 18:05:41.986512 4680 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/434d2204-587d-4dfe-b473-32490d402e2b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:41 crc kubenswrapper[4680]: I0217 18:05:41.986525 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/434d2204-587d-4dfe-b473-32490d402e2b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:41 crc kubenswrapper[4680]: I0217 18:05:41.991501 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/434d2204-587d-4dfe-b473-32490d402e2b-config" (OuterVolumeSpecName: "config") pod "434d2204-587d-4dfe-b473-32490d402e2b" (UID: "434d2204-587d-4dfe-b473-32490d402e2b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:05:41 crc kubenswrapper[4680]: I0217 18:05:41.992304 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/434d2204-587d-4dfe-b473-32490d402e2b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "434d2204-587d-4dfe-b473-32490d402e2b" (UID: "434d2204-587d-4dfe-b473-32490d402e2b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:05:42 crc kubenswrapper[4680]: I0217 18:05:42.000369 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/434d2204-587d-4dfe-b473-32490d402e2b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "434d2204-587d-4dfe-b473-32490d402e2b" (UID: "434d2204-587d-4dfe-b473-32490d402e2b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:05:42 crc kubenswrapper[4680]: I0217 18:05:42.088526 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/434d2204-587d-4dfe-b473-32490d402e2b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:42 crc kubenswrapper[4680]: I0217 18:05:42.088555 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/434d2204-587d-4dfe-b473-32490d402e2b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:42 crc kubenswrapper[4680]: I0217 18:05:42.088566 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/434d2204-587d-4dfe-b473-32490d402e2b-config\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:42 crc kubenswrapper[4680]: I0217 18:05:42.407654 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-kgh75" event={"ID":"434d2204-587d-4dfe-b473-32490d402e2b","Type":"ContainerDied","Data":"7dbb0b678b46b3f23ba9e9c1ebb645fb2ace8ae6e92a2cc97d35f23d76527a60"} Feb 17 18:05:42 crc kubenswrapper[4680]: I0217 18:05:42.407718 4680 scope.go:117] "RemoveContainer" containerID="48dedf604a5f7d3ea1176543be3ba7bdbc70e4af0ae2b34751c8680b36eb8295" Feb 17 18:05:42 crc kubenswrapper[4680]: I0217 18:05:42.409000 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-kgh75" Feb 17 18:05:42 crc kubenswrapper[4680]: I0217 18:05:42.438290 4680 scope.go:117] "RemoveContainer" containerID="b8068b3d14f281f98876d3303e53790479ae2e5d358e554b2b4c424c21946540" Feb 17 18:05:42 crc kubenswrapper[4680]: I0217 18:05:42.458279 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-kgh75"] Feb 17 18:05:42 crc kubenswrapper[4680]: I0217 18:05:42.472054 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-kgh75"] Feb 17 18:05:43 crc kubenswrapper[4680]: I0217 18:05:43.118071 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="434d2204-587d-4dfe-b473-32490d402e2b" path="/var/lib/kubelet/pods/434d2204-587d-4dfe-b473-32490d402e2b/volumes" Feb 17 18:05:43 crc kubenswrapper[4680]: I0217 18:05:43.418654 4680 generic.go:334] "Generic (PLEG): container finished" podID="72189e3b-d3ab-48b3-a4e8-2a6109e97409" containerID="5e5d1f2959951a5dc6e4eea7b27814a5fa0d394a43aa60b119d1e7aa6f9865af" exitCode=0 Feb 17 18:05:43 crc kubenswrapper[4680]: I0217 18:05:43.418794 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72189e3b-d3ab-48b3-a4e8-2a6109e97409","Type":"ContainerDied","Data":"5e5d1f2959951a5dc6e4eea7b27814a5fa0d394a43aa60b119d1e7aa6f9865af"} Feb 17 18:05:43 crc kubenswrapper[4680]: I0217 18:05:43.419114 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72189e3b-d3ab-48b3-a4e8-2a6109e97409","Type":"ContainerDied","Data":"875a2ccaaabc4d457f8aafb42a8e0da7aeef3421142c926052ca334ebaea3286"} Feb 17 18:05:43 crc kubenswrapper[4680]: I0217 18:05:43.419130 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="875a2ccaaabc4d457f8aafb42a8e0da7aeef3421142c926052ca334ebaea3286" Feb 17 18:05:43 crc kubenswrapper[4680]: I0217 18:05:43.454235 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 18:05:43 crc kubenswrapper[4680]: I0217 18:05:43.614480 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgdwv\" (UniqueName: \"kubernetes.io/projected/72189e3b-d3ab-48b3-a4e8-2a6109e97409-kube-api-access-jgdwv\") pod \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\" (UID: \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\") " Feb 17 18:05:43 crc kubenswrapper[4680]: I0217 18:05:43.614528 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/72189e3b-d3ab-48b3-a4e8-2a6109e97409-ceilometer-tls-certs\") pod \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\" (UID: \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\") " Feb 17 18:05:43 crc kubenswrapper[4680]: I0217 18:05:43.614592 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72189e3b-d3ab-48b3-a4e8-2a6109e97409-run-httpd\") pod \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\" (UID: \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\") " Feb 17 18:05:43 crc kubenswrapper[4680]: I0217 18:05:43.614616 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72189e3b-d3ab-48b3-a4e8-2a6109e97409-sg-core-conf-yaml\") pod \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\" (UID: \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\") " Feb 17 18:05:43 crc kubenswrapper[4680]: I0217 18:05:43.614638 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72189e3b-d3ab-48b3-a4e8-2a6109e97409-combined-ca-bundle\") pod \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\" (UID: \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\") " Feb 17 18:05:43 crc kubenswrapper[4680]: I0217 18:05:43.614670 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72189e3b-d3ab-48b3-a4e8-2a6109e97409-log-httpd\") pod \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\" (UID: \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\") " Feb 17 18:05:43 crc kubenswrapper[4680]: I0217 18:05:43.614743 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72189e3b-d3ab-48b3-a4e8-2a6109e97409-config-data\") pod \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\" (UID: \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\") " Feb 17 18:05:43 crc kubenswrapper[4680]: I0217 18:05:43.614866 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72189e3b-d3ab-48b3-a4e8-2a6109e97409-scripts\") pod \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\" (UID: \"72189e3b-d3ab-48b3-a4e8-2a6109e97409\") " Feb 17 18:05:43 crc kubenswrapper[4680]: I0217 18:05:43.616399 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72189e3b-d3ab-48b3-a4e8-2a6109e97409-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "72189e3b-d3ab-48b3-a4e8-2a6109e97409" (UID: "72189e3b-d3ab-48b3-a4e8-2a6109e97409"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:05:43 crc kubenswrapper[4680]: I0217 18:05:43.616668 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72189e3b-d3ab-48b3-a4e8-2a6109e97409-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "72189e3b-d3ab-48b3-a4e8-2a6109e97409" (UID: "72189e3b-d3ab-48b3-a4e8-2a6109e97409"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:05:43 crc kubenswrapper[4680]: I0217 18:05:43.622564 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72189e3b-d3ab-48b3-a4e8-2a6109e97409-scripts" (OuterVolumeSpecName: "scripts") pod "72189e3b-d3ab-48b3-a4e8-2a6109e97409" (UID: "72189e3b-d3ab-48b3-a4e8-2a6109e97409"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:05:43 crc kubenswrapper[4680]: I0217 18:05:43.626637 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72189e3b-d3ab-48b3-a4e8-2a6109e97409-kube-api-access-jgdwv" (OuterVolumeSpecName: "kube-api-access-jgdwv") pod "72189e3b-d3ab-48b3-a4e8-2a6109e97409" (UID: "72189e3b-d3ab-48b3-a4e8-2a6109e97409"). InnerVolumeSpecName "kube-api-access-jgdwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:05:43 crc kubenswrapper[4680]: I0217 18:05:43.660914 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72189e3b-d3ab-48b3-a4e8-2a6109e97409-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "72189e3b-d3ab-48b3-a4e8-2a6109e97409" (UID: "72189e3b-d3ab-48b3-a4e8-2a6109e97409"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:05:43 crc kubenswrapper[4680]: I0217 18:05:43.677763 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72189e3b-d3ab-48b3-a4e8-2a6109e97409-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "72189e3b-d3ab-48b3-a4e8-2a6109e97409" (UID: "72189e3b-d3ab-48b3-a4e8-2a6109e97409"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:05:43 crc kubenswrapper[4680]: I0217 18:05:43.699556 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72189e3b-d3ab-48b3-a4e8-2a6109e97409-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72189e3b-d3ab-48b3-a4e8-2a6109e97409" (UID: "72189e3b-d3ab-48b3-a4e8-2a6109e97409"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:05:43 crc kubenswrapper[4680]: I0217 18:05:43.717142 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72189e3b-d3ab-48b3-a4e8-2a6109e97409-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:43 crc kubenswrapper[4680]: I0217 18:05:43.717180 4680 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/72189e3b-d3ab-48b3-a4e8-2a6109e97409-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:43 crc kubenswrapper[4680]: I0217 18:05:43.717194 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgdwv\" (UniqueName: \"kubernetes.io/projected/72189e3b-d3ab-48b3-a4e8-2a6109e97409-kube-api-access-jgdwv\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:43 crc kubenswrapper[4680]: I0217 18:05:43.717205 4680 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72189e3b-d3ab-48b3-a4e8-2a6109e97409-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:43 crc kubenswrapper[4680]: I0217 18:05:43.717215 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72189e3b-d3ab-48b3-a4e8-2a6109e97409-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:43 crc kubenswrapper[4680]: I0217 18:05:43.717226 4680 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72189e3b-d3ab-48b3-a4e8-2a6109e97409-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:43 crc kubenswrapper[4680]: I0217 18:05:43.717237 4680 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72189e3b-d3ab-48b3-a4e8-2a6109e97409-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:43 crc kubenswrapper[4680]: I0217 18:05:43.750702 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72189e3b-d3ab-48b3-a4e8-2a6109e97409-config-data" (OuterVolumeSpecName: "config-data") pod "72189e3b-d3ab-48b3-a4e8-2a6109e97409" (UID: "72189e3b-d3ab-48b3-a4e8-2a6109e97409"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:05:43 crc kubenswrapper[4680]: I0217 18:05:43.762993 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 17 18:05:43 crc kubenswrapper[4680]: I0217 18:05:43.792143 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 17 18:05:43 crc kubenswrapper[4680]: I0217 18:05:43.819185 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72189e3b-d3ab-48b3-a4e8-2a6109e97409-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.428452 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.470930 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.478475 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.502906 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.516972 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 18:05:44 crc kubenswrapper[4680]: E0217 18:05:44.517923 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72189e3b-d3ab-48b3-a4e8-2a6109e97409" containerName="ceilometer-notification-agent" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.517948 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="72189e3b-d3ab-48b3-a4e8-2a6109e97409" containerName="ceilometer-notification-agent" Feb 17 18:05:44 crc kubenswrapper[4680]: E0217 18:05:44.517974 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="434d2204-587d-4dfe-b473-32490d402e2b" containerName="dnsmasq-dns" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.517983 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="434d2204-587d-4dfe-b473-32490d402e2b" containerName="dnsmasq-dns" Feb 17 18:05:44 crc kubenswrapper[4680]: E0217 18:05:44.518006 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72189e3b-d3ab-48b3-a4e8-2a6109e97409" containerName="ceilometer-central-agent" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.518016 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="72189e3b-d3ab-48b3-a4e8-2a6109e97409" containerName="ceilometer-central-agent" Feb 17 18:05:44 crc kubenswrapper[4680]: E0217 18:05:44.518030 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72189e3b-d3ab-48b3-a4e8-2a6109e97409" containerName="proxy-httpd" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.518039 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="72189e3b-d3ab-48b3-a4e8-2a6109e97409" containerName="proxy-httpd" Feb 17 18:05:44 crc kubenswrapper[4680]: E0217 18:05:44.518053 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72189e3b-d3ab-48b3-a4e8-2a6109e97409" containerName="sg-core" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.518061 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="72189e3b-d3ab-48b3-a4e8-2a6109e97409" containerName="sg-core" Feb 17 18:05:44 crc kubenswrapper[4680]: E0217 18:05:44.518078 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="434d2204-587d-4dfe-b473-32490d402e2b" containerName="init" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.518086 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="434d2204-587d-4dfe-b473-32490d402e2b" containerName="init" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.518344 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="72189e3b-d3ab-48b3-a4e8-2a6109e97409" containerName="sg-core" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.521757 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="434d2204-587d-4dfe-b473-32490d402e2b" containerName="dnsmasq-dns" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.522041 4680 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="72189e3b-d3ab-48b3-a4e8-2a6109e97409" containerName="ceilometer-central-agent" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.522078 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="72189e3b-d3ab-48b3-a4e8-2a6109e97409" containerName="proxy-httpd" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.522092 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="72189e3b-d3ab-48b3-a4e8-2a6109e97409" containerName="ceilometer-notification-agent" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.529174 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.533092 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.533342 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.533938 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.545678 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.633379 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c4b1bad-b08f-4697-a925-b3b943394bdb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7c4b1bad-b08f-4697-a925-b3b943394bdb\") " pod="openstack/ceilometer-0" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.633427 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c4b1bad-b08f-4697-a925-b3b943394bdb-config-data\") pod \"ceilometer-0\" (UID: \"7c4b1bad-b08f-4697-a925-b3b943394bdb\") " pod="openstack/ceilometer-0" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.633444 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c4b1bad-b08f-4697-a925-b3b943394bdb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7c4b1bad-b08f-4697-a925-b3b943394bdb\") " pod="openstack/ceilometer-0" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.633477 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c4b1bad-b08f-4697-a925-b3b943394bdb-scripts\") pod \"ceilometer-0\" (UID: \"7c4b1bad-b08f-4697-a925-b3b943394bdb\") " pod="openstack/ceilometer-0" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.633491 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c4b1bad-b08f-4697-a925-b3b943394bdb-run-httpd\") pod \"ceilometer-0\" (UID: \"7c4b1bad-b08f-4697-a925-b3b943394bdb\") " pod="openstack/ceilometer-0" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.633597 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c4b1bad-b08f-4697-a925-b3b943394bdb-log-httpd\") pod \"ceilometer-0\" (UID: \"7c4b1bad-b08f-4697-a925-b3b943394bdb\") " pod="openstack/ceilometer-0" Feb 17 
18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.633650 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm7dq\" (UniqueName: \"kubernetes.io/projected/7c4b1bad-b08f-4697-a925-b3b943394bdb-kube-api-access-hm7dq\") pod \"ceilometer-0\" (UID: \"7c4b1bad-b08f-4697-a925-b3b943394bdb\") " pod="openstack/ceilometer-0" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.633844 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c4b1bad-b08f-4697-a925-b3b943394bdb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7c4b1bad-b08f-4697-a925-b3b943394bdb\") " pod="openstack/ceilometer-0" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.735357 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c4b1bad-b08f-4697-a925-b3b943394bdb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7c4b1bad-b08f-4697-a925-b3b943394bdb\") " pod="openstack/ceilometer-0" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.735411 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c4b1bad-b08f-4697-a925-b3b943394bdb-config-data\") pod \"ceilometer-0\" (UID: \"7c4b1bad-b08f-4697-a925-b3b943394bdb\") " pod="openstack/ceilometer-0" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.735426 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c4b1bad-b08f-4697-a925-b3b943394bdb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7c4b1bad-b08f-4697-a925-b3b943394bdb\") " pod="openstack/ceilometer-0" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.735471 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c4b1bad-b08f-4697-a925-b3b943394bdb-scripts\") pod \"ceilometer-0\" (UID: \"7c4b1bad-b08f-4697-a925-b3b943394bdb\") " pod="openstack/ceilometer-0" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.735492 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c4b1bad-b08f-4697-a925-b3b943394bdb-run-httpd\") pod \"ceilometer-0\" (UID: \"7c4b1bad-b08f-4697-a925-b3b943394bdb\") " pod="openstack/ceilometer-0" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.735513 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c4b1bad-b08f-4697-a925-b3b943394bdb-log-httpd\") pod \"ceilometer-0\" (UID: \"7c4b1bad-b08f-4697-a925-b3b943394bdb\") " pod="openstack/ceilometer-0" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.735536 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm7dq\" (UniqueName: \"kubernetes.io/projected/7c4b1bad-b08f-4697-a925-b3b943394bdb-kube-api-access-hm7dq\") pod \"ceilometer-0\" (UID: \"7c4b1bad-b08f-4697-a925-b3b943394bdb\") " pod="openstack/ceilometer-0" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.735602 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c4b1bad-b08f-4697-a925-b3b943394bdb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"7c4b1bad-b08f-4697-a925-b3b943394bdb\") " pod="openstack/ceilometer-0" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.736171 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c4b1bad-b08f-4697-a925-b3b943394bdb-run-httpd\") pod \"ceilometer-0\" (UID: \"7c4b1bad-b08f-4697-a925-b3b943394bdb\") " pod="openstack/ceilometer-0" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.736490 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c4b1bad-b08f-4697-a925-b3b943394bdb-log-httpd\") pod \"ceilometer-0\" (UID: \"7c4b1bad-b08f-4697-a925-b3b943394bdb\") " pod="openstack/ceilometer-0" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.739415 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c4b1bad-b08f-4697-a925-b3b943394bdb-scripts\") pod \"ceilometer-0\" (UID: \"7c4b1bad-b08f-4697-a925-b3b943394bdb\") " pod="openstack/ceilometer-0" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.740055 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c4b1bad-b08f-4697-a925-b3b943394bdb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7c4b1bad-b08f-4697-a925-b3b943394bdb\") " pod="openstack/ceilometer-0" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.740086 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c4b1bad-b08f-4697-a925-b3b943394bdb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7c4b1bad-b08f-4697-a925-b3b943394bdb\") " pod="openstack/ceilometer-0" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.753578 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c4b1bad-b08f-4697-a925-b3b943394bdb-config-data\") pod \"ceilometer-0\" (UID: \"7c4b1bad-b08f-4697-a925-b3b943394bdb\") " pod="openstack/ceilometer-0" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.754121 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm7dq\" (UniqueName: \"kubernetes.io/projected/7c4b1bad-b08f-4697-a925-b3b943394bdb-kube-api-access-hm7dq\") pod \"ceilometer-0\" (UID: \"7c4b1bad-b08f-4697-a925-b3b943394bdb\") " pod="openstack/ceilometer-0" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.755933 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c4b1bad-b08f-4697-a925-b3b943394bdb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7c4b1bad-b08f-4697-a925-b3b943394bdb\") " pod="openstack/ceilometer-0" Feb 17 18:05:44 crc kubenswrapper[4680]: I0217 18:05:44.874416 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 18:05:45 crc kubenswrapper[4680]: I0217 18:05:45.123701 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72189e3b-d3ab-48b3-a4e8-2a6109e97409" path="/var/lib/kubelet/pods/72189e3b-d3ab-48b3-a4e8-2a6109e97409/volumes" Feb 17 18:05:45 crc kubenswrapper[4680]: I0217 18:05:45.423120 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 18:05:45 crc kubenswrapper[4680]: I0217 18:05:45.459703 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c4b1bad-b08f-4697-a925-b3b943394bdb","Type":"ContainerStarted","Data":"dd85f7f2fb92b58524fb4bdc09c09373b65391f6618d9f1ddb22bae6ca6e4987"} Feb 17 18:05:46 crc kubenswrapper[4680]: I0217 18:05:46.469861 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c4b1bad-b08f-4697-a925-b3b943394bdb","Type":"ContainerStarted","Data":"f08464f01861c5cc75baa65bb07a0bc2651e748981b9cb81ca9fb202a7f9f7c2"} Feb 17 18:05:47 crc kubenswrapper[4680]: E0217 18:05:47.266357 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bd2854a_e9a2_41c3_92fc_1be5f01d44af.slice/crio-conmon-ebcdc95cd812523e9fa1dc8c9cd53f6dbb38efdce728e8663d559a07fe48a056.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bd2854a_e9a2_41c3_92fc_1be5f01d44af.slice/crio-ebcdc95cd812523e9fa1dc8c9cd53f6dbb38efdce728e8663d559a07fe48a056.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5b3e0bf_c836_4d70_a6a8_16829590ba9c.slice/crio-bfc024701faed933819be6dc1a433590a0abae932248ca5837911809f1c3e9e6.scope\": RecentStats: unable to find data in memory cache]" Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.481182 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c4b1bad-b08f-4697-a925-b3b943394bdb","Type":"ContainerStarted","Data":"f0b94e59111e9becf526f06ef02e1da6f91bf7fe1fbd4c790d3a9615db7f36fa"} Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.484020 4680 generic.go:334] "Generic (PLEG): container finished" podID="5bd2854a-e9a2-41c3-92fc-1be5f01d44af" containerID="ebcdc95cd812523e9fa1dc8c9cd53f6dbb38efdce728e8663d559a07fe48a056" exitCode=137 Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.484126 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5bd2854a-e9a2-41c3-92fc-1be5f01d44af","Type":"ContainerDied","Data":"ebcdc95cd812523e9fa1dc8c9cd53f6dbb38efdce728e8663d559a07fe48a056"} Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.484163 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5bd2854a-e9a2-41c3-92fc-1be5f01d44af","Type":"ContainerDied","Data":"ffa8c21faa8eb97eccf646bfac8644d1de23e1b438dd96c73aab80fcd61272ae"} Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.484178 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffa8c21faa8eb97eccf646bfac8644d1de23e1b438dd96c73aab80fcd61272ae" Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.486097 4680 generic.go:334] "Generic (PLEG): container finished" podID="a5b3e0bf-c836-4d70-a6a8-16829590ba9c" 
containerID="bfc024701faed933819be6dc1a433590a0abae932248ca5837911809f1c3e9e6" exitCode=137 Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.486145 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a5b3e0bf-c836-4d70-a6a8-16829590ba9c","Type":"ContainerDied","Data":"bfc024701faed933819be6dc1a433590a0abae932248ca5837911809f1c3e9e6"} Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.486161 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a5b3e0bf-c836-4d70-a6a8-16829590ba9c","Type":"ContainerDied","Data":"b988c929e00ea0217c4a503466ecae6d2529a8884f997f741459612854312e24"} Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.486173 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b988c929e00ea0217c4a503466ecae6d2529a8884f997f741459612854312e24" Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.488235 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.497483 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.638138 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wx48g\" (UniqueName: \"kubernetes.io/projected/a5b3e0bf-c836-4d70-a6a8-16829590ba9c-kube-api-access-wx48g\") pod \"a5b3e0bf-c836-4d70-a6a8-16829590ba9c\" (UID: \"a5b3e0bf-c836-4d70-a6a8-16829590ba9c\") " Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.638239 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bd2854a-e9a2-41c3-92fc-1be5f01d44af-logs\") pod \"5bd2854a-e9a2-41c3-92fc-1be5f01d44af\" (UID: \"5bd2854a-e9a2-41c3-92fc-1be5f01d44af\") " Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.638661 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bd2854a-e9a2-41c3-92fc-1be5f01d44af-logs" (OuterVolumeSpecName: "logs") pod "5bd2854a-e9a2-41c3-92fc-1be5f01d44af" (UID: "5bd2854a-e9a2-41c3-92fc-1be5f01d44af"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.638816 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bd2854a-e9a2-41c3-92fc-1be5f01d44af-combined-ca-bundle\") pod \"5bd2854a-e9a2-41c3-92fc-1be5f01d44af\" (UID: \"5bd2854a-e9a2-41c3-92fc-1be5f01d44af\") " Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.639197 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7mvh\" (UniqueName: \"kubernetes.io/projected/5bd2854a-e9a2-41c3-92fc-1be5f01d44af-kube-api-access-f7mvh\") pod \"5bd2854a-e9a2-41c3-92fc-1be5f01d44af\" (UID: \"5bd2854a-e9a2-41c3-92fc-1be5f01d44af\") " Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.639278 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5b3e0bf-c836-4d70-a6a8-16829590ba9c-combined-ca-bundle\") pod \"a5b3e0bf-c836-4d70-a6a8-16829590ba9c\" (UID: \"a5b3e0bf-c836-4d70-a6a8-16829590ba9c\") " Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.639428 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5b3e0bf-c836-4d70-a6a8-16829590ba9c-config-data\") pod \"a5b3e0bf-c836-4d70-a6a8-16829590ba9c\" (UID: \"a5b3e0bf-c836-4d70-a6a8-16829590ba9c\") " Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.639470 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bd2854a-e9a2-41c3-92fc-1be5f01d44af-config-data\") pod \"5bd2854a-e9a2-41c3-92fc-1be5f01d44af\" (UID: \"5bd2854a-e9a2-41c3-92fc-1be5f01d44af\") " Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.639940 4680 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bd2854a-e9a2-41c3-92fc-1be5f01d44af-logs\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.665390 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bd2854a-e9a2-41c3-92fc-1be5f01d44af-kube-api-access-f7mvh" (OuterVolumeSpecName: "kube-api-access-f7mvh") pod "5bd2854a-e9a2-41c3-92fc-1be5f01d44af" (UID: "5bd2854a-e9a2-41c3-92fc-1be5f01d44af"). InnerVolumeSpecName "kube-api-access-f7mvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.666390 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5b3e0bf-c836-4d70-a6a8-16829590ba9c-kube-api-access-wx48g" (OuterVolumeSpecName: "kube-api-access-wx48g") pod "a5b3e0bf-c836-4d70-a6a8-16829590ba9c" (UID: "a5b3e0bf-c836-4d70-a6a8-16829590ba9c"). InnerVolumeSpecName "kube-api-access-wx48g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.680800 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5b3e0bf-c836-4d70-a6a8-16829590ba9c-config-data" (OuterVolumeSpecName: "config-data") pod "a5b3e0bf-c836-4d70-a6a8-16829590ba9c" (UID: "a5b3e0bf-c836-4d70-a6a8-16829590ba9c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.680811 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bd2854a-e9a2-41c3-92fc-1be5f01d44af-config-data" (OuterVolumeSpecName: "config-data") pod "5bd2854a-e9a2-41c3-92fc-1be5f01d44af" (UID: "5bd2854a-e9a2-41c3-92fc-1be5f01d44af"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.687693 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bd2854a-e9a2-41c3-92fc-1be5f01d44af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5bd2854a-e9a2-41c3-92fc-1be5f01d44af" (UID: "5bd2854a-e9a2-41c3-92fc-1be5f01d44af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.705908 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5b3e0bf-c836-4d70-a6a8-16829590ba9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5b3e0bf-c836-4d70-a6a8-16829590ba9c" (UID: "a5b3e0bf-c836-4d70-a6a8-16829590ba9c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.742554 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5b3e0bf-c836-4d70-a6a8-16829590ba9c-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.742595 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bd2854a-e9a2-41c3-92fc-1be5f01d44af-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.742608 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wx48g\" (UniqueName: \"kubernetes.io/projected/a5b3e0bf-c836-4d70-a6a8-16829590ba9c-kube-api-access-wx48g\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.742621 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bd2854a-e9a2-41c3-92fc-1be5f01d44af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.742633 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7mvh\" (UniqueName: \"kubernetes.io/projected/5bd2854a-e9a2-41c3-92fc-1be5f01d44af-kube-api-access-f7mvh\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.742643 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5b3e0bf-c836-4d70-a6a8-16829590ba9c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.760587 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 18:05:47 crc kubenswrapper[4680]: I0217 18:05:47.760929 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.498110 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.498338 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c4b1bad-b08f-4697-a925-b3b943394bdb","Type":"ContainerStarted","Data":"9f933be905d5b761fbf29cd1b9dcc3f18e854f51ea43a8d263b36c36a031f06c"} Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.498414 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.548764 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.560833 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.578789 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.591745 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.609218 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 18:05:48 crc kubenswrapper[4680]: E0217 18:05:48.609956 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5b3e0bf-c836-4d70-a6a8-16829590ba9c" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.610073 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5b3e0bf-c836-4d70-a6a8-16829590ba9c" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 18:05:48 crc kubenswrapper[4680]: E0217 18:05:48.610160 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd2854a-e9a2-41c3-92fc-1be5f01d44af" containerName="nova-metadata-metadata" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.610230 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bd2854a-e9a2-41c3-92fc-1be5f01d44af" containerName="nova-metadata-metadata" Feb 17 18:05:48 crc kubenswrapper[4680]: E0217 18:05:48.610309 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd2854a-e9a2-41c3-92fc-1be5f01d44af" containerName="nova-metadata-log" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.610427 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bd2854a-e9a2-41c3-92fc-1be5f01d44af" containerName="nova-metadata-log" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.610732 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bd2854a-e9a2-41c3-92fc-1be5f01d44af" containerName="nova-metadata-log" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.610832 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bd2854a-e9a2-41c3-92fc-1be5f01d44af" containerName="nova-metadata-metadata" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.610926 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5b3e0bf-c836-4d70-a6a8-16829590ba9c" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.611772 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.616731 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.616994 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.617294 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.684629 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.686509 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.694134 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.694443 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.704176 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.720680 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.783008 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c1fc559-7f55-4c70-b448-921d56e4a6c3-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c1fc559-7f55-4c70-b448-921d56e4a6c3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.784125 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c1fc559-7f55-4c70-b448-921d56e4a6c3-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c1fc559-7f55-4c70-b448-921d56e4a6c3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.784223 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c1fc559-7f55-4c70-b448-921d56e4a6c3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c1fc559-7f55-4c70-b448-921d56e4a6c3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.784332 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwq4f\" (UniqueName: \"kubernetes.io/projected/6c1fc559-7f55-4c70-b448-921d56e4a6c3-kube-api-access-xwq4f\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c1fc559-7f55-4c70-b448-921d56e4a6c3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.784379 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1fc559-7f55-4c70-b448-921d56e4a6c3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"6c1fc559-7f55-4c70-b448-921d56e4a6c3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.787504 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="eb18be12-e29f-410a-9ada-860058113aa3" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.194:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.787783 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="eb18be12-e29f-410a-9ada-860058113aa3" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.194:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.885884 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwq4f\" (UniqueName: \"kubernetes.io/projected/6c1fc559-7f55-4c70-b448-921d56e4a6c3-kube-api-access-xwq4f\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c1fc559-7f55-4c70-b448-921d56e4a6c3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.885965 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1fc559-7f55-4c70-b448-921d56e4a6c3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c1fc559-7f55-4c70-b448-921d56e4a6c3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.886021 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/565450d6-6be1-41fd-973f-6f69adc10f85-config-data\") pod \"nova-metadata-0\" (UID: \"565450d6-6be1-41fd-973f-6f69adc10f85\") " pod="openstack/nova-metadata-0" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.886071 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/565450d6-6be1-41fd-973f-6f69adc10f85-logs\") pod \"nova-metadata-0\" (UID: \"565450d6-6be1-41fd-973f-6f69adc10f85\") " pod="openstack/nova-metadata-0" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.886111 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c1fc559-7f55-4c70-b448-921d56e4a6c3-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c1fc559-7f55-4c70-b448-921d56e4a6c3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.886266 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c1fc559-7f55-4c70-b448-921d56e4a6c3-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c1fc559-7f55-4c70-b448-921d56e4a6c3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.886377 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/565450d6-6be1-41fd-973f-6f69adc10f85-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"565450d6-6be1-41fd-973f-6f69adc10f85\") " pod="openstack/nova-metadata-0" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 
18:05:48.886426 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c1fc559-7f55-4c70-b448-921d56e4a6c3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c1fc559-7f55-4c70-b448-921d56e4a6c3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.886443 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vhmn\" (UniqueName: \"kubernetes.io/projected/565450d6-6be1-41fd-973f-6f69adc10f85-kube-api-access-9vhmn\") pod \"nova-metadata-0\" (UID: \"565450d6-6be1-41fd-973f-6f69adc10f85\") " pod="openstack/nova-metadata-0" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.886569 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/565450d6-6be1-41fd-973f-6f69adc10f85-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"565450d6-6be1-41fd-973f-6f69adc10f85\") " pod="openstack/nova-metadata-0" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.900160 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c1fc559-7f55-4c70-b448-921d56e4a6c3-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c1fc559-7f55-4c70-b448-921d56e4a6c3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.914128 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c1fc559-7f55-4c70-b448-921d56e4a6c3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c1fc559-7f55-4c70-b448-921d56e4a6c3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.914342 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c1fc559-7f55-4c70-b448-921d56e4a6c3-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c1fc559-7f55-4c70-b448-921d56e4a6c3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.915516 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1fc559-7f55-4c70-b448-921d56e4a6c3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c1fc559-7f55-4c70-b448-921d56e4a6c3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.918224 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwq4f\" (UniqueName: \"kubernetes.io/projected/6c1fc559-7f55-4c70-b448-921d56e4a6c3-kube-api-access-xwq4f\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c1fc559-7f55-4c70-b448-921d56e4a6c3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.956124 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.990403 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/565450d6-6be1-41fd-973f-6f69adc10f85-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"565450d6-6be1-41fd-973f-6f69adc10f85\") " pod="openstack/nova-metadata-0" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.990466 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vhmn\" (UniqueName: \"kubernetes.io/projected/565450d6-6be1-41fd-973f-6f69adc10f85-kube-api-access-9vhmn\") pod \"nova-metadata-0\" (UID: \"565450d6-6be1-41fd-973f-6f69adc10f85\") " pod="openstack/nova-metadata-0" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.990521 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/565450d6-6be1-41fd-973f-6f69adc10f85-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"565450d6-6be1-41fd-973f-6f69adc10f85\") " pod="openstack/nova-metadata-0" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.990584 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/565450d6-6be1-41fd-973f-6f69adc10f85-config-data\") pod \"nova-metadata-0\" (UID: \"565450d6-6be1-41fd-973f-6f69adc10f85\") " pod="openstack/nova-metadata-0" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.990614 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/565450d6-6be1-41fd-973f-6f69adc10f85-logs\") pod \"nova-metadata-0\" (UID: \"565450d6-6be1-41fd-973f-6f69adc10f85\") " pod="openstack/nova-metadata-0" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.991098 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/565450d6-6be1-41fd-973f-6f69adc10f85-logs\") pod \"nova-metadata-0\" (UID: \"565450d6-6be1-41fd-973f-6f69adc10f85\") " pod="openstack/nova-metadata-0" Feb 17 18:05:48 crc kubenswrapper[4680]: I0217 18:05:48.997121 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/565450d6-6be1-41fd-973f-6f69adc10f85-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"565450d6-6be1-41fd-973f-6f69adc10f85\") " pod="openstack/nova-metadata-0" Feb 17 18:05:49 crc kubenswrapper[4680]: I0217 18:05:49.013046 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/565450d6-6be1-41fd-973f-6f69adc10f85-config-data\") pod \"nova-metadata-0\" (UID: \"565450d6-6be1-41fd-973f-6f69adc10f85\") " pod="openstack/nova-metadata-0" Feb 17 18:05:49 crc kubenswrapper[4680]: I0217 18:05:49.013242 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/565450d6-6be1-41fd-973f-6f69adc10f85-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"565450d6-6be1-41fd-973f-6f69adc10f85\") " pod="openstack/nova-metadata-0" Feb 17 18:05:49 crc kubenswrapper[4680]: I0217 18:05:49.030937 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vhmn\" (UniqueName: 
\"kubernetes.io/projected/565450d6-6be1-41fd-973f-6f69adc10f85-kube-api-access-9vhmn\") pod \"nova-metadata-0\" (UID: \"565450d6-6be1-41fd-973f-6f69adc10f85\") " pod="openstack/nova-metadata-0" Feb 17 18:05:49 crc kubenswrapper[4680]: I0217 18:05:49.140670 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bd2854a-e9a2-41c3-92fc-1be5f01d44af" path="/var/lib/kubelet/pods/5bd2854a-e9a2-41c3-92fc-1be5f01d44af/volumes" Feb 17 18:05:49 crc kubenswrapper[4680]: I0217 18:05:49.142129 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5b3e0bf-c836-4d70-a6a8-16829590ba9c" path="/var/lib/kubelet/pods/a5b3e0bf-c836-4d70-a6a8-16829590ba9c/volumes" Feb 17 18:05:49 crc kubenswrapper[4680]: I0217 18:05:49.328539 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 18:05:49 crc kubenswrapper[4680]: W0217 18:05:49.560600 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c1fc559_7f55_4c70_b448_921d56e4a6c3.slice/crio-4772d67cfa6b900b57d09ed4b4b56ad1cf32dc6b35ba89e27c9829c7c04559a5 WatchSource:0}: Error finding container 4772d67cfa6b900b57d09ed4b4b56ad1cf32dc6b35ba89e27c9829c7c04559a5: Status 404 returned error can't find the container with id 4772d67cfa6b900b57d09ed4b4b56ad1cf32dc6b35ba89e27c9829c7c04559a5 Feb 17 18:05:49 crc kubenswrapper[4680]: I0217 18:05:49.575418 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 18:05:49 crc kubenswrapper[4680]: I0217 18:05:49.933247 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 18:05:49 crc kubenswrapper[4680]: W0217 18:05:49.942993 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod565450d6_6be1_41fd_973f_6f69adc10f85.slice/crio-e0a352ee5a20a90ed149c63ac9ab832b7c1cd1ab23ad83cf228b0a80575677b8 WatchSource:0}: Error finding container e0a352ee5a20a90ed149c63ac9ab832b7c1cd1ab23ad83cf228b0a80575677b8: Status 404 returned error can't find the container with id e0a352ee5a20a90ed149c63ac9ab832b7c1cd1ab23ad83cf228b0a80575677b8 Feb 17 18:05:50 crc kubenswrapper[4680]: I0217 18:05:50.528665 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"565450d6-6be1-41fd-973f-6f69adc10f85","Type":"ContainerStarted","Data":"a04822978b0390fa120576dce8ca45cd9239d856d0b2cb643e07a753a29d6c16"} Feb 17 18:05:50 crc kubenswrapper[4680]: I0217 18:05:50.529091 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"565450d6-6be1-41fd-973f-6f69adc10f85","Type":"ContainerStarted","Data":"fc5d9438da2e6637250eef701c5e7b0d364bf71153f5862571629fb59db756f2"} Feb 17 18:05:50 crc kubenswrapper[4680]: I0217 18:05:50.529107 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"565450d6-6be1-41fd-973f-6f69adc10f85","Type":"ContainerStarted","Data":"e0a352ee5a20a90ed149c63ac9ab832b7c1cd1ab23ad83cf228b0a80575677b8"} Feb 17 18:05:50 crc kubenswrapper[4680]: I0217 18:05:50.531711 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6c1fc559-7f55-4c70-b448-921d56e4a6c3","Type":"ContainerStarted","Data":"eacf15ee08f897a9ebc236e8f8792fd4f45bf54c516f497cdf03bd0c4d8d52d8"} Feb 17 18:05:50 crc kubenswrapper[4680]: I0217 18:05:50.531771 4680 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6c1fc559-7f55-4c70-b448-921d56e4a6c3","Type":"ContainerStarted","Data":"4772d67cfa6b900b57d09ed4b4b56ad1cf32dc6b35ba89e27c9829c7c04559a5"} Feb 17 18:05:50 crc kubenswrapper[4680]: I0217 18:05:50.534872 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c4b1bad-b08f-4697-a925-b3b943394bdb","Type":"ContainerStarted","Data":"996bbd47430b4df308317486f5e5c639cf011159cdf56436f1012946d5ef8ede"} Feb 17 18:05:50 crc kubenswrapper[4680]: I0217 18:05:50.535060 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 18:05:50 crc kubenswrapper[4680]: I0217 18:05:50.566166 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.566143608 podStartE2EDuration="2.566143608s" podCreationTimestamp="2026-02-17 18:05:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:05:50.546608825 +0000 UTC m=+1280.097507504" watchObservedRunningTime="2026-02-17 18:05:50.566143608 +0000 UTC m=+1280.117042287" Feb 17 18:05:50 crc kubenswrapper[4680]: I0217 18:05:50.595637 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.595616063 podStartE2EDuration="2.595616063s" podCreationTimestamp="2026-02-17 18:05:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:05:50.580359885 +0000 UTC m=+1280.131258574" watchObservedRunningTime="2026-02-17 18:05:50.595616063 +0000 UTC m=+1280.146514742" Feb 17 18:05:50 crc kubenswrapper[4680]: I0217 18:05:50.616357 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.58702435 podStartE2EDuration="6.616338714s" podCreationTimestamp="2026-02-17 18:05:44 +0000 UTC" firstStartedPulling="2026-02-17 18:05:45.444697896 +0000 UTC m=+1274.995596575" lastFinishedPulling="2026-02-17 18:05:49.47401226 +0000 UTC m=+1279.024910939" observedRunningTime="2026-02-17 18:05:50.606964509 +0000 UTC m=+1280.157863198" watchObservedRunningTime="2026-02-17 18:05:50.616338714 +0000 UTC m=+1280.167237393" Feb 17 18:05:53 crc kubenswrapper[4680]: I0217 18:05:53.967897 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:54 crc kubenswrapper[4680]: I0217 18:05:54.329410 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 18:05:54 crc kubenswrapper[4680]: I0217 18:05:54.329463 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 18:05:57 crc kubenswrapper[4680]: I0217 18:05:57.766736 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 18:05:57 crc kubenswrapper[4680]: I0217 18:05:57.767661 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 18:05:57 crc kubenswrapper[4680]: I0217 18:05:57.774138 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 18:05:57 crc kubenswrapper[4680]: I0217 18:05:57.778898 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-api-0" Feb 17 18:05:58 crc kubenswrapper[4680]: I0217 18:05:58.602965 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 18:05:58 crc kubenswrapper[4680]: I0217 18:05:58.609940 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 18:05:58 crc kubenswrapper[4680]: I0217 18:05:58.957002 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:58 crc kubenswrapper[4680]: I0217 18:05:58.976576 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:59 crc kubenswrapper[4680]: I0217 18:05:59.329976 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 18:05:59 crc kubenswrapper[4680]: I0217 18:05:59.330037 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 18:05:59 crc kubenswrapper[4680]: I0217 18:05:59.633514 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 17 18:05:59 crc kubenswrapper[4680]: I0217 18:05:59.819697 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-5pmz5"] Feb 17 18:05:59 crc kubenswrapper[4680]: I0217 18:05:59.826213 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5pmz5" Feb 17 18:05:59 crc kubenswrapper[4680]: I0217 18:05:59.828721 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 17 18:05:59 crc kubenswrapper[4680]: I0217 18:05:59.835597 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5pmz5"] Feb 17 18:05:59 crc kubenswrapper[4680]: I0217 18:05:59.845138 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 17 18:05:59 crc kubenswrapper[4680]: I0217 18:05:59.925450 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeb67070-334f-40f4-9fb4-789f761e50fc-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5pmz5\" (UID: \"aeb67070-334f-40f4-9fb4-789f761e50fc\") " pod="openstack/nova-cell1-cell-mapping-5pmz5" Feb 17 18:05:59 crc kubenswrapper[4680]: I0217 18:05:59.925741 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aeb67070-334f-40f4-9fb4-789f761e50fc-scripts\") pod \"nova-cell1-cell-mapping-5pmz5\" (UID: \"aeb67070-334f-40f4-9fb4-789f761e50fc\") " pod="openstack/nova-cell1-cell-mapping-5pmz5" Feb 17 18:05:59 crc kubenswrapper[4680]: I0217 18:05:59.925861 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeb67070-334f-40f4-9fb4-789f761e50fc-config-data\") pod \"nova-cell1-cell-mapping-5pmz5\" (UID: \"aeb67070-334f-40f4-9fb4-789f761e50fc\") " pod="openstack/nova-cell1-cell-mapping-5pmz5" Feb 17 18:05:59 crc kubenswrapper[4680]: I0217 18:05:59.925903 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtc4v\" (UniqueName: 
\"kubernetes.io/projected/aeb67070-334f-40f4-9fb4-789f761e50fc-kube-api-access-jtc4v\") pod \"nova-cell1-cell-mapping-5pmz5\" (UID: \"aeb67070-334f-40f4-9fb4-789f761e50fc\") " pod="openstack/nova-cell1-cell-mapping-5pmz5" Feb 17 18:06:00 crc kubenswrapper[4680]: I0217 18:06:00.028182 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeb67070-334f-40f4-9fb4-789f761e50fc-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5pmz5\" (UID: \"aeb67070-334f-40f4-9fb4-789f761e50fc\") " pod="openstack/nova-cell1-cell-mapping-5pmz5" Feb 17 18:06:00 crc kubenswrapper[4680]: I0217 18:06:00.028306 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aeb67070-334f-40f4-9fb4-789f761e50fc-scripts\") pod \"nova-cell1-cell-mapping-5pmz5\" (UID: \"aeb67070-334f-40f4-9fb4-789f761e50fc\") " pod="openstack/nova-cell1-cell-mapping-5pmz5" Feb 17 18:06:00 crc kubenswrapper[4680]: I0217 18:06:00.028391 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeb67070-334f-40f4-9fb4-789f761e50fc-config-data\") pod \"nova-cell1-cell-mapping-5pmz5\" (UID: \"aeb67070-334f-40f4-9fb4-789f761e50fc\") " pod="openstack/nova-cell1-cell-mapping-5pmz5" Feb 17 18:06:00 crc kubenswrapper[4680]: I0217 18:06:00.028417 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtc4v\" (UniqueName: \"kubernetes.io/projected/aeb67070-334f-40f4-9fb4-789f761e50fc-kube-api-access-jtc4v\") pod \"nova-cell1-cell-mapping-5pmz5\" (UID: \"aeb67070-334f-40f4-9fb4-789f761e50fc\") " pod="openstack/nova-cell1-cell-mapping-5pmz5" Feb 17 18:06:00 crc kubenswrapper[4680]: I0217 18:06:00.035613 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeb67070-334f-40f4-9fb4-789f761e50fc-config-data\") pod \"nova-cell1-cell-mapping-5pmz5\" (UID: \"aeb67070-334f-40f4-9fb4-789f761e50fc\") " pod="openstack/nova-cell1-cell-mapping-5pmz5" Feb 17 18:06:00 crc kubenswrapper[4680]: I0217 18:06:00.036003 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aeb67070-334f-40f4-9fb4-789f761e50fc-scripts\") pod \"nova-cell1-cell-mapping-5pmz5\" (UID: \"aeb67070-334f-40f4-9fb4-789f761e50fc\") " pod="openstack/nova-cell1-cell-mapping-5pmz5" Feb 17 18:06:00 crc kubenswrapper[4680]: I0217 18:06:00.037654 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeb67070-334f-40f4-9fb4-789f761e50fc-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5pmz5\" (UID: \"aeb67070-334f-40f4-9fb4-789f761e50fc\") " pod="openstack/nova-cell1-cell-mapping-5pmz5" Feb 17 18:06:00 crc kubenswrapper[4680]: I0217 18:06:00.051204 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtc4v\" (UniqueName: \"kubernetes.io/projected/aeb67070-334f-40f4-9fb4-789f761e50fc-kube-api-access-jtc4v\") pod \"nova-cell1-cell-mapping-5pmz5\" (UID: \"aeb67070-334f-40f4-9fb4-789f761e50fc\") " pod="openstack/nova-cell1-cell-mapping-5pmz5" Feb 17 18:06:00 crc kubenswrapper[4680]: I0217 18:06:00.152603 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5pmz5" Feb 17 18:06:00 crc kubenswrapper[4680]: I0217 18:06:00.388830 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="565450d6-6be1-41fd-973f-6f69adc10f85" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 18:06:00 crc kubenswrapper[4680]: I0217 18:06:00.389057 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="565450d6-6be1-41fd-973f-6f69adc10f85" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 18:06:00 crc kubenswrapper[4680]: I0217 18:06:00.731677 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5pmz5"] Feb 17 18:06:01 crc kubenswrapper[4680]: I0217 18:06:01.629991 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5pmz5" event={"ID":"aeb67070-334f-40f4-9fb4-789f761e50fc","Type":"ContainerStarted","Data":"a322e20a7b28592e732e0fb47b521397de5badc470bdbcdd12c691efb8633f2c"} Feb 17 18:06:01 crc kubenswrapper[4680]: I0217 18:06:01.630620 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5pmz5" event={"ID":"aeb67070-334f-40f4-9fb4-789f761e50fc","Type":"ContainerStarted","Data":"9f0dd38c97e713d6d7d13d9fdfa8570058b9976da0ea2584a4d49622ae8eed8f"} Feb 17 18:06:01 crc kubenswrapper[4680]: I0217 18:06:01.654963 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-5pmz5" podStartSLOduration=2.654942332 podStartE2EDuration="2.654942332s" podCreationTimestamp="2026-02-17 18:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:06:01.645825385 +0000 UTC m=+1291.196724064" watchObservedRunningTime="2026-02-17 18:06:01.654942332 +0000 UTC m=+1291.205841011" Feb 17 18:06:05 crc kubenswrapper[4680]: I0217 18:06:05.588411 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:06:05 crc kubenswrapper[4680]: I0217 18:06:05.588992 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:06:07 crc kubenswrapper[4680]: I0217 18:06:07.681505 4680 generic.go:334] "Generic (PLEG): container finished" podID="aeb67070-334f-40f4-9fb4-789f761e50fc" containerID="a322e20a7b28592e732e0fb47b521397de5badc470bdbcdd12c691efb8633f2c" exitCode=0 Feb 17 18:06:07 crc kubenswrapper[4680]: I0217 18:06:07.681570 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5pmz5" event={"ID":"aeb67070-334f-40f4-9fb4-789f761e50fc","Type":"ContainerDied","Data":"a322e20a7b28592e732e0fb47b521397de5badc470bdbcdd12c691efb8633f2c"} Feb 17 18:06:09 crc kubenswrapper[4680]: I0217 
18:06:09.138200 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5pmz5" Feb 17 18:06:09 crc kubenswrapper[4680]: I0217 18:06:09.259022 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aeb67070-334f-40f4-9fb4-789f761e50fc-scripts\") pod \"aeb67070-334f-40f4-9fb4-789f761e50fc\" (UID: \"aeb67070-334f-40f4-9fb4-789f761e50fc\") " Feb 17 18:06:09 crc kubenswrapper[4680]: I0217 18:06:09.259193 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeb67070-334f-40f4-9fb4-789f761e50fc-config-data\") pod \"aeb67070-334f-40f4-9fb4-789f761e50fc\" (UID: \"aeb67070-334f-40f4-9fb4-789f761e50fc\") " Feb 17 18:06:09 crc kubenswrapper[4680]: I0217 18:06:09.259348 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtc4v\" (UniqueName: \"kubernetes.io/projected/aeb67070-334f-40f4-9fb4-789f761e50fc-kube-api-access-jtc4v\") pod \"aeb67070-334f-40f4-9fb4-789f761e50fc\" (UID: \"aeb67070-334f-40f4-9fb4-789f761e50fc\") " Feb 17 18:06:09 crc kubenswrapper[4680]: I0217 18:06:09.259381 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeb67070-334f-40f4-9fb4-789f761e50fc-combined-ca-bundle\") pod \"aeb67070-334f-40f4-9fb4-789f761e50fc\" (UID: \"aeb67070-334f-40f4-9fb4-789f761e50fc\") " Feb 17 18:06:09 crc kubenswrapper[4680]: I0217 18:06:09.265460 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeb67070-334f-40f4-9fb4-789f761e50fc-scripts" (OuterVolumeSpecName: "scripts") pod "aeb67070-334f-40f4-9fb4-789f761e50fc" (UID: "aeb67070-334f-40f4-9fb4-789f761e50fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:06:09 crc kubenswrapper[4680]: I0217 18:06:09.266027 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeb67070-334f-40f4-9fb4-789f761e50fc-kube-api-access-jtc4v" (OuterVolumeSpecName: "kube-api-access-jtc4v") pod "aeb67070-334f-40f4-9fb4-789f761e50fc" (UID: "aeb67070-334f-40f4-9fb4-789f761e50fc"). InnerVolumeSpecName "kube-api-access-jtc4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:06:09 crc kubenswrapper[4680]: I0217 18:06:09.287978 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeb67070-334f-40f4-9fb4-789f761e50fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aeb67070-334f-40f4-9fb4-789f761e50fc" (UID: "aeb67070-334f-40f4-9fb4-789f761e50fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:06:09 crc kubenswrapper[4680]: I0217 18:06:09.289550 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeb67070-334f-40f4-9fb4-789f761e50fc-config-data" (OuterVolumeSpecName: "config-data") pod "aeb67070-334f-40f4-9fb4-789f761e50fc" (UID: "aeb67070-334f-40f4-9fb4-789f761e50fc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:06:09 crc kubenswrapper[4680]: I0217 18:06:09.337026 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 18:06:09 crc kubenswrapper[4680]: I0217 18:06:09.337097 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 18:06:09 crc kubenswrapper[4680]: I0217 18:06:09.350626 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 18:06:09 crc kubenswrapper[4680]: I0217 18:06:09.351024 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 18:06:09 crc kubenswrapper[4680]: I0217 18:06:09.362962 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeb67070-334f-40f4-9fb4-789f761e50fc-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:06:09 crc kubenswrapper[4680]: I0217 18:06:09.362989 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtc4v\" (UniqueName: \"kubernetes.io/projected/aeb67070-334f-40f4-9fb4-789f761e50fc-kube-api-access-jtc4v\") on node \"crc\" DevicePath \"\"" Feb 17 18:06:09 crc kubenswrapper[4680]: I0217 18:06:09.362998 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeb67070-334f-40f4-9fb4-789f761e50fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:06:09 crc kubenswrapper[4680]: I0217 18:06:09.363007 4680 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aeb67070-334f-40f4-9fb4-789f761e50fc-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 18:06:09 crc kubenswrapper[4680]: I0217 18:06:09.702459 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5pmz5" Feb 17 18:06:09 crc kubenswrapper[4680]: I0217 18:06:09.702620 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5pmz5" event={"ID":"aeb67070-334f-40f4-9fb4-789f761e50fc","Type":"ContainerDied","Data":"9f0dd38c97e713d6d7d13d9fdfa8570058b9976da0ea2584a4d49622ae8eed8f"} Feb 17 18:06:09 crc kubenswrapper[4680]: I0217 18:06:09.702682 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f0dd38c97e713d6d7d13d9fdfa8570058b9976da0ea2584a4d49622ae8eed8f" Feb 17 18:06:09 crc kubenswrapper[4680]: I0217 18:06:09.809635 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 18:06:09 crc kubenswrapper[4680]: I0217 18:06:09.810202 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="eb18be12-e29f-410a-9ada-860058113aa3" containerName="nova-api-api" containerID="cri-o://bbdbf93767e11f09f62270c799255d7bed2c2e01a523af36b8098fb007f80b93" gracePeriod=30 Feb 17 18:06:09 crc kubenswrapper[4680]: I0217 18:06:09.810269 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="eb18be12-e29f-410a-9ada-860058113aa3" containerName="nova-api-log" containerID="cri-o://abae1c75d44842abbefd85ec7fe45257bcddc24a76e73e65bd47c51fc0458442" gracePeriod=30 Feb 17 18:06:09 crc kubenswrapper[4680]: I0217 18:06:09.836198 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 18:06:09 crc kubenswrapper[4680]: I0217 18:06:09.836470 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="725cb326-b867-449a-baca-95ca7093c415" containerName="nova-scheduler-scheduler" containerID="cri-o://a21761f29ca820d913bfa7243f7ac5c6e7117b43a163254cfec6d96484061bd1" gracePeriod=30 Feb 17 18:06:09 crc kubenswrapper[4680]: I0217 18:06:09.900917 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 18:06:10 crc kubenswrapper[4680]: I0217 18:06:10.711639 4680 generic.go:334] "Generic (PLEG): container finished" podID="eb18be12-e29f-410a-9ada-860058113aa3" containerID="abae1c75d44842abbefd85ec7fe45257bcddc24a76e73e65bd47c51fc0458442" exitCode=143 Feb 17 18:06:10 crc kubenswrapper[4680]: I0217 18:06:10.711770 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb18be12-e29f-410a-9ada-860058113aa3","Type":"ContainerDied","Data":"abae1c75d44842abbefd85ec7fe45257bcddc24a76e73e65bd47c51fc0458442"} Feb 17 18:06:11 crc kubenswrapper[4680]: I0217 18:06:11.720503 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="565450d6-6be1-41fd-973f-6f69adc10f85" containerName="nova-metadata-log" containerID="cri-o://fc5d9438da2e6637250eef701c5e7b0d364bf71153f5862571629fb59db756f2" gracePeriod=30 Feb 17 18:06:11 crc kubenswrapper[4680]: I0217 18:06:11.720826 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="565450d6-6be1-41fd-973f-6f69adc10f85" containerName="nova-metadata-metadata" containerID="cri-o://a04822978b0390fa120576dce8ca45cd9239d856d0b2cb643e07a753a29d6c16" gracePeriod=30 Feb 17 18:06:12 crc kubenswrapper[4680]: I0217 18:06:12.729360 4680 generic.go:334] "Generic (PLEG): container finished" podID="565450d6-6be1-41fd-973f-6f69adc10f85" 
containerID="fc5d9438da2e6637250eef701c5e7b0d364bf71153f5862571629fb59db756f2" exitCode=143 Feb 17 18:06:12 crc kubenswrapper[4680]: I0217 18:06:12.729463 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"565450d6-6be1-41fd-973f-6f69adc10f85","Type":"ContainerDied","Data":"fc5d9438da2e6637250eef701c5e7b0d364bf71153f5862571629fb59db756f2"} Feb 17 18:06:13 crc kubenswrapper[4680]: I0217 18:06:13.627191 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 18:06:13 crc kubenswrapper[4680]: I0217 18:06:13.643061 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb18be12-e29f-410a-9ada-860058113aa3-combined-ca-bundle\") pod \"eb18be12-e29f-410a-9ada-860058113aa3\" (UID: \"eb18be12-e29f-410a-9ada-860058113aa3\") " Feb 17 18:06:13 crc kubenswrapper[4680]: I0217 18:06:13.643147 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb18be12-e29f-410a-9ada-860058113aa3-config-data\") pod \"eb18be12-e29f-410a-9ada-860058113aa3\" (UID: \"eb18be12-e29f-410a-9ada-860058113aa3\") " Feb 17 18:06:13 crc kubenswrapper[4680]: I0217 18:06:13.643173 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb18be12-e29f-410a-9ada-860058113aa3-public-tls-certs\") pod \"eb18be12-e29f-410a-9ada-860058113aa3\" (UID: \"eb18be12-e29f-410a-9ada-860058113aa3\") " Feb 17 18:06:13 crc kubenswrapper[4680]: I0217 18:06:13.643203 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb18be12-e29f-410a-9ada-860058113aa3-internal-tls-certs\") pod \"eb18be12-e29f-410a-9ada-860058113aa3\" (UID: \"eb18be12-e29f-410a-9ada-860058113aa3\") " Feb 17 18:06:13 crc kubenswrapper[4680]: I0217 18:06:13.643239 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4bsk\" (UniqueName: \"kubernetes.io/projected/eb18be12-e29f-410a-9ada-860058113aa3-kube-api-access-s4bsk\") pod \"eb18be12-e29f-410a-9ada-860058113aa3\" (UID: \"eb18be12-e29f-410a-9ada-860058113aa3\") " Feb 17 18:06:13 crc kubenswrapper[4680]: I0217 18:06:13.643283 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb18be12-e29f-410a-9ada-860058113aa3-logs\") pod \"eb18be12-e29f-410a-9ada-860058113aa3\" (UID: \"eb18be12-e29f-410a-9ada-860058113aa3\") " Feb 17 18:06:13 crc kubenswrapper[4680]: I0217 18:06:13.643958 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb18be12-e29f-410a-9ada-860058113aa3-logs" (OuterVolumeSpecName: "logs") pod "eb18be12-e29f-410a-9ada-860058113aa3" (UID: "eb18be12-e29f-410a-9ada-860058113aa3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:06:13 crc kubenswrapper[4680]: I0217 18:06:13.651700 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb18be12-e29f-410a-9ada-860058113aa3-kube-api-access-s4bsk" (OuterVolumeSpecName: "kube-api-access-s4bsk") pod "eb18be12-e29f-410a-9ada-860058113aa3" (UID: "eb18be12-e29f-410a-9ada-860058113aa3"). InnerVolumeSpecName "kube-api-access-s4bsk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:06:13 crc kubenswrapper[4680]: I0217 18:06:13.688732 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb18be12-e29f-410a-9ada-860058113aa3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb18be12-e29f-410a-9ada-860058113aa3" (UID: "eb18be12-e29f-410a-9ada-860058113aa3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:06:13 crc kubenswrapper[4680]: I0217 18:06:13.715671 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb18be12-e29f-410a-9ada-860058113aa3-config-data" (OuterVolumeSpecName: "config-data") pod "eb18be12-e29f-410a-9ada-860058113aa3" (UID: "eb18be12-e29f-410a-9ada-860058113aa3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:06:13 crc kubenswrapper[4680]: I0217 18:06:13.752302 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb18be12-e29f-410a-9ada-860058113aa3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:06:13 crc kubenswrapper[4680]: I0217 18:06:13.752367 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb18be12-e29f-410a-9ada-860058113aa3-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:06:13 crc kubenswrapper[4680]: I0217 18:06:13.752380 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4bsk\" (UniqueName: \"kubernetes.io/projected/eb18be12-e29f-410a-9ada-860058113aa3-kube-api-access-s4bsk\") on node \"crc\" DevicePath \"\"" Feb 17 18:06:13 crc kubenswrapper[4680]: I0217 18:06:13.755483 4680 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb18be12-e29f-410a-9ada-860058113aa3-logs\") on node \"crc\" DevicePath \"\"" Feb 17 18:06:13 crc kubenswrapper[4680]: I0217 18:06:13.771066 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb18be12-e29f-410a-9ada-860058113aa3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "eb18be12-e29f-410a-9ada-860058113aa3" (UID: "eb18be12-e29f-410a-9ada-860058113aa3"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:06:13 crc kubenswrapper[4680]: I0217 18:06:13.777690 4680 generic.go:334] "Generic (PLEG): container finished" podID="eb18be12-e29f-410a-9ada-860058113aa3" containerID="bbdbf93767e11f09f62270c799255d7bed2c2e01a523af36b8098fb007f80b93" exitCode=0 Feb 17 18:06:13 crc kubenswrapper[4680]: I0217 18:06:13.777776 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb18be12-e29f-410a-9ada-860058113aa3","Type":"ContainerDied","Data":"bbdbf93767e11f09f62270c799255d7bed2c2e01a523af36b8098fb007f80b93"} Feb 17 18:06:13 crc kubenswrapper[4680]: I0217 18:06:13.777829 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb18be12-e29f-410a-9ada-860058113aa3","Type":"ContainerDied","Data":"2b74df591a7b26aeaac45f9a58e7310a9d8b5f04a754009d3d7ca7e412415331"} Feb 17 18:06:13 crc kubenswrapper[4680]: I0217 18:06:13.777851 4680 scope.go:117] "RemoveContainer" containerID="bbdbf93767e11f09f62270c799255d7bed2c2e01a523af36b8098fb007f80b93" Feb 17 18:06:13 crc kubenswrapper[4680]: I0217 18:06:13.778033 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 18:06:13 crc kubenswrapper[4680]: E0217 18:06:13.780308 4680 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a21761f29ca820d913bfa7243f7ac5c6e7117b43a163254cfec6d96484061bd1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 18:06:13 crc kubenswrapper[4680]: E0217 18:06:13.782882 4680 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a21761f29ca820d913bfa7243f7ac5c6e7117b43a163254cfec6d96484061bd1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 18:06:13 crc kubenswrapper[4680]: E0217 18:06:13.794589 4680 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a21761f29ca820d913bfa7243f7ac5c6e7117b43a163254cfec6d96484061bd1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 18:06:13 crc kubenswrapper[4680]: E0217 18:06:13.797674 4680 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="725cb326-b867-449a-baca-95ca7093c415" containerName="nova-scheduler-scheduler" Feb 17 18:06:13 crc kubenswrapper[4680]: I0217 18:06:13.827797 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb18be12-e29f-410a-9ada-860058113aa3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "eb18be12-e29f-410a-9ada-860058113aa3" (UID: "eb18be12-e29f-410a-9ada-860058113aa3"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:06:13 crc kubenswrapper[4680]: I0217 18:06:13.829066 4680 scope.go:117] "RemoveContainer" containerID="abae1c75d44842abbefd85ec7fe45257bcddc24a76e73e65bd47c51fc0458442" Feb 17 18:06:13 crc kubenswrapper[4680]: I0217 18:06:13.848134 4680 scope.go:117] "RemoveContainer" containerID="bbdbf93767e11f09f62270c799255d7bed2c2e01a523af36b8098fb007f80b93" Feb 17 18:06:13 crc kubenswrapper[4680]: E0217 18:06:13.848633 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbdbf93767e11f09f62270c799255d7bed2c2e01a523af36b8098fb007f80b93\": container with ID starting with bbdbf93767e11f09f62270c799255d7bed2c2e01a523af36b8098fb007f80b93 not found: ID does not exist" containerID="bbdbf93767e11f09f62270c799255d7bed2c2e01a523af36b8098fb007f80b93" Feb 17 18:06:13 crc kubenswrapper[4680]: I0217 18:06:13.848694 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbdbf93767e11f09f62270c799255d7bed2c2e01a523af36b8098fb007f80b93"} err="failed to get container status \"bbdbf93767e11f09f62270c799255d7bed2c2e01a523af36b8098fb007f80b93\": rpc error: code = NotFound desc = could not find container \"bbdbf93767e11f09f62270c799255d7bed2c2e01a523af36b8098fb007f80b93\": container with ID starting with bbdbf93767e11f09f62270c799255d7bed2c2e01a523af36b8098fb007f80b93 not found: ID does not exist" Feb 17 18:06:13 crc kubenswrapper[4680]: I0217 18:06:13.848952 4680 scope.go:117] "RemoveContainer" containerID="abae1c75d44842abbefd85ec7fe45257bcddc24a76e73e65bd47c51fc0458442" Feb 17 18:06:13 crc kubenswrapper[4680]: E0217 18:06:13.849446 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abae1c75d44842abbefd85ec7fe45257bcddc24a76e73e65bd47c51fc0458442\": container with ID starting with abae1c75d44842abbefd85ec7fe45257bcddc24a76e73e65bd47c51fc0458442 not found: ID does not exist" containerID="abae1c75d44842abbefd85ec7fe45257bcddc24a76e73e65bd47c51fc0458442" Feb 17 18:06:13 crc kubenswrapper[4680]: I0217 18:06:13.849513 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abae1c75d44842abbefd85ec7fe45257bcddc24a76e73e65bd47c51fc0458442"} err="failed to get container status \"abae1c75d44842abbefd85ec7fe45257bcddc24a76e73e65bd47c51fc0458442\": rpc error: code = NotFound desc = could not find container \"abae1c75d44842abbefd85ec7fe45257bcddc24a76e73e65bd47c51fc0458442\": container with ID starting with abae1c75d44842abbefd85ec7fe45257bcddc24a76e73e65bd47c51fc0458442 not found: ID does not exist" Feb 17 18:06:13 crc kubenswrapper[4680]: I0217 18:06:13.857829 4680 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb18be12-e29f-410a-9ada-860058113aa3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 18:06:13 crc kubenswrapper[4680]: I0217 18:06:13.857869 4680 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb18be12-e29f-410a-9ada-860058113aa3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.117217 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.128185 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 17 
18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.149573 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 18:06:14 crc kubenswrapper[4680]: E0217 18:06:14.150068 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb18be12-e29f-410a-9ada-860058113aa3" containerName="nova-api-api" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.150094 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb18be12-e29f-410a-9ada-860058113aa3" containerName="nova-api-api" Feb 17 18:06:14 crc kubenswrapper[4680]: E0217 18:06:14.150115 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb18be12-e29f-410a-9ada-860058113aa3" containerName="nova-api-log" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.150126 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb18be12-e29f-410a-9ada-860058113aa3" containerName="nova-api-log" Feb 17 18:06:14 crc kubenswrapper[4680]: E0217 18:06:14.150154 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeb67070-334f-40f4-9fb4-789f761e50fc" containerName="nova-manage" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.150166 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb67070-334f-40f4-9fb4-789f761e50fc" containerName="nova-manage" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.150412 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeb67070-334f-40f4-9fb4-789f761e50fc" containerName="nova-manage" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.150428 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb18be12-e29f-410a-9ada-860058113aa3" containerName="nova-api-log" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.150439 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb18be12-e29f-410a-9ada-860058113aa3" containerName="nova-api-api" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.151730 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.159811 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.159832 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.160061 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.162272 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz2ll\" (UniqueName: \"kubernetes.io/projected/884f4451-a521-4e0f-9fd8-5f4e9c094fb8-kube-api-access-vz2ll\") pod \"nova-api-0\" (UID: \"884f4451-a521-4e0f-9fd8-5f4e9c094fb8\") " pod="openstack/nova-api-0" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.162355 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/884f4451-a521-4e0f-9fd8-5f4e9c094fb8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"884f4451-a521-4e0f-9fd8-5f4e9c094fb8\") " pod="openstack/nova-api-0" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.162416 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/884f4451-a521-4e0f-9fd8-5f4e9c094fb8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"884f4451-a521-4e0f-9fd8-5f4e9c094fb8\") " pod="openstack/nova-api-0" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.162439 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/884f4451-a521-4e0f-9fd8-5f4e9c094fb8-config-data\") pod \"nova-api-0\" (UID: \"884f4451-a521-4e0f-9fd8-5f4e9c094fb8\") " pod="openstack/nova-api-0" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.162463 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/884f4451-a521-4e0f-9fd8-5f4e9c094fb8-logs\") pod \"nova-api-0\" (UID: \"884f4451-a521-4e0f-9fd8-5f4e9c094fb8\") " pod="openstack/nova-api-0" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.162620 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/884f4451-a521-4e0f-9fd8-5f4e9c094fb8-public-tls-certs\") pod \"nova-api-0\" (UID: \"884f4451-a521-4e0f-9fd8-5f4e9c094fb8\") " pod="openstack/nova-api-0" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.171915 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.264382 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz2ll\" (UniqueName: \"kubernetes.io/projected/884f4451-a521-4e0f-9fd8-5f4e9c094fb8-kube-api-access-vz2ll\") pod \"nova-api-0\" (UID: \"884f4451-a521-4e0f-9fd8-5f4e9c094fb8\") " pod="openstack/nova-api-0" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.264426 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/884f4451-a521-4e0f-9fd8-5f4e9c094fb8-combined-ca-bundle\") 
pod \"nova-api-0\" (UID: \"884f4451-a521-4e0f-9fd8-5f4e9c094fb8\") " pod="openstack/nova-api-0" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.264450 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/884f4451-a521-4e0f-9fd8-5f4e9c094fb8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"884f4451-a521-4e0f-9fd8-5f4e9c094fb8\") " pod="openstack/nova-api-0" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.264470 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/884f4451-a521-4e0f-9fd8-5f4e9c094fb8-config-data\") pod \"nova-api-0\" (UID: \"884f4451-a521-4e0f-9fd8-5f4e9c094fb8\") " pod="openstack/nova-api-0" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.264487 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/884f4451-a521-4e0f-9fd8-5f4e9c094fb8-logs\") pod \"nova-api-0\" (UID: \"884f4451-a521-4e0f-9fd8-5f4e9c094fb8\") " pod="openstack/nova-api-0" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.264588 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/884f4451-a521-4e0f-9fd8-5f4e9c094fb8-public-tls-certs\") pod \"nova-api-0\" (UID: \"884f4451-a521-4e0f-9fd8-5f4e9c094fb8\") " pod="openstack/nova-api-0" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.265299 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/884f4451-a521-4e0f-9fd8-5f4e9c094fb8-logs\") pod \"nova-api-0\" (UID: \"884f4451-a521-4e0f-9fd8-5f4e9c094fb8\") " pod="openstack/nova-api-0" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.268478 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/884f4451-a521-4e0f-9fd8-5f4e9c094fb8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"884f4451-a521-4e0f-9fd8-5f4e9c094fb8\") " pod="openstack/nova-api-0" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.268599 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/884f4451-a521-4e0f-9fd8-5f4e9c094fb8-public-tls-certs\") pod \"nova-api-0\" (UID: \"884f4451-a521-4e0f-9fd8-5f4e9c094fb8\") " pod="openstack/nova-api-0" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.268638 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/884f4451-a521-4e0f-9fd8-5f4e9c094fb8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"884f4451-a521-4e0f-9fd8-5f4e9c094fb8\") " pod="openstack/nova-api-0" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.270551 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/884f4451-a521-4e0f-9fd8-5f4e9c094fb8-config-data\") pod \"nova-api-0\" (UID: \"884f4451-a521-4e0f-9fd8-5f4e9c094fb8\") " pod="openstack/nova-api-0" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.285995 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz2ll\" (UniqueName: \"kubernetes.io/projected/884f4451-a521-4e0f-9fd8-5f4e9c094fb8-kube-api-access-vz2ll\") pod \"nova-api-0\" (UID: \"884f4451-a521-4e0f-9fd8-5f4e9c094fb8\") " pod="openstack/nova-api-0" Feb 
17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.473778 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.811682 4680 generic.go:334] "Generic (PLEG): container finished" podID="725cb326-b867-449a-baca-95ca7093c415" containerID="a21761f29ca820d913bfa7243f7ac5c6e7117b43a163254cfec6d96484061bd1" exitCode=0 Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.811762 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"725cb326-b867-449a-baca-95ca7093c415","Type":"ContainerDied","Data":"a21761f29ca820d913bfa7243f7ac5c6e7117b43a163254cfec6d96484061bd1"} Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.865349 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="565450d6-6be1-41fd-973f-6f69adc10f85" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:55900->10.217.0.197:8775: read: connection reset by peer" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.865411 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="565450d6-6be1-41fd-973f-6f69adc10f85" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:55884->10.217.0.197:8775: read: connection reset by peer" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.896037 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 17 18:06:14 crc kubenswrapper[4680]: I0217 18:06:14.995268 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 18:06:15 crc kubenswrapper[4680]: W0217 18:06:15.020465 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod884f4451_a521_4e0f_9fd8_5f4e9c094fb8.slice/crio-08bcf39938fc797123f4040fbb26410cdd88e203746f60797a768d69699dc5bc WatchSource:0}: Error finding container 08bcf39938fc797123f4040fbb26410cdd88e203746f60797a768d69699dc5bc: Status 404 returned error can't find the container with id 08bcf39938fc797123f4040fbb26410cdd88e203746f60797a768d69699dc5bc Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.021935 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.116951 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb18be12-e29f-410a-9ada-860058113aa3" path="/var/lib/kubelet/pods/eb18be12-e29f-410a-9ada-860058113aa3/volumes" Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.205563 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/725cb326-b867-449a-baca-95ca7093c415-config-data\") pod \"725cb326-b867-449a-baca-95ca7093c415\" (UID: \"725cb326-b867-449a-baca-95ca7093c415\") " Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.205989 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/725cb326-b867-449a-baca-95ca7093c415-combined-ca-bundle\") pod \"725cb326-b867-449a-baca-95ca7093c415\" (UID: \"725cb326-b867-449a-baca-95ca7093c415\") " Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.206198 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fm685\" (UniqueName: \"kubernetes.io/projected/725cb326-b867-449a-baca-95ca7093c415-kube-api-access-fm685\") pod \"725cb326-b867-449a-baca-95ca7093c415\" (UID: \"725cb326-b867-449a-baca-95ca7093c415\") " Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.224352 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/725cb326-b867-449a-baca-95ca7093c415-kube-api-access-fm685" (OuterVolumeSpecName: "kube-api-access-fm685") pod "725cb326-b867-449a-baca-95ca7093c415" (UID: "725cb326-b867-449a-baca-95ca7093c415"). InnerVolumeSpecName "kube-api-access-fm685". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.265925 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/725cb326-b867-449a-baca-95ca7093c415-config-data" (OuterVolumeSpecName: "config-data") pod "725cb326-b867-449a-baca-95ca7093c415" (UID: "725cb326-b867-449a-baca-95ca7093c415"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.277200 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/725cb326-b867-449a-baca-95ca7093c415-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "725cb326-b867-449a-baca-95ca7093c415" (UID: "725cb326-b867-449a-baca-95ca7093c415"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.308174 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fm685\" (UniqueName: \"kubernetes.io/projected/725cb326-b867-449a-baca-95ca7093c415-kube-api-access-fm685\") on node \"crc\" DevicePath \"\"" Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.308219 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/725cb326-b867-449a-baca-95ca7093c415-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.308231 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/725cb326-b867-449a-baca-95ca7093c415-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.389973 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.511574 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/565450d6-6be1-41fd-973f-6f69adc10f85-nova-metadata-tls-certs\") pod \"565450d6-6be1-41fd-973f-6f69adc10f85\" (UID: \"565450d6-6be1-41fd-973f-6f69adc10f85\") " Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.511673 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/565450d6-6be1-41fd-973f-6f69adc10f85-combined-ca-bundle\") pod \"565450d6-6be1-41fd-973f-6f69adc10f85\" (UID: \"565450d6-6be1-41fd-973f-6f69adc10f85\") " Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.511742 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vhmn\" (UniqueName: \"kubernetes.io/projected/565450d6-6be1-41fd-973f-6f69adc10f85-kube-api-access-9vhmn\") pod \"565450d6-6be1-41fd-973f-6f69adc10f85\" (UID: \"565450d6-6be1-41fd-973f-6f69adc10f85\") " Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.511934 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/565450d6-6be1-41fd-973f-6f69adc10f85-logs\") pod \"565450d6-6be1-41fd-973f-6f69adc10f85\" (UID: \"565450d6-6be1-41fd-973f-6f69adc10f85\") " Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.512016 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/565450d6-6be1-41fd-973f-6f69adc10f85-config-data\") pod \"565450d6-6be1-41fd-973f-6f69adc10f85\" (UID: \"565450d6-6be1-41fd-973f-6f69adc10f85\") " Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.514067 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/565450d6-6be1-41fd-973f-6f69adc10f85-logs" (OuterVolumeSpecName: "logs") pod "565450d6-6be1-41fd-973f-6f69adc10f85" (UID: "565450d6-6be1-41fd-973f-6f69adc10f85"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.524861 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/565450d6-6be1-41fd-973f-6f69adc10f85-kube-api-access-9vhmn" (OuterVolumeSpecName: "kube-api-access-9vhmn") pod "565450d6-6be1-41fd-973f-6f69adc10f85" (UID: "565450d6-6be1-41fd-973f-6f69adc10f85"). InnerVolumeSpecName "kube-api-access-9vhmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.610452 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/565450d6-6be1-41fd-973f-6f69adc10f85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "565450d6-6be1-41fd-973f-6f69adc10f85" (UID: "565450d6-6be1-41fd-973f-6f69adc10f85"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.613129 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/565450d6-6be1-41fd-973f-6f69adc10f85-config-data" (OuterVolumeSpecName: "config-data") pod "565450d6-6be1-41fd-973f-6f69adc10f85" (UID: "565450d6-6be1-41fd-973f-6f69adc10f85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.614613 4680 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/565450d6-6be1-41fd-973f-6f69adc10f85-logs\") on node \"crc\" DevicePath \"\"" Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.614630 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/565450d6-6be1-41fd-973f-6f69adc10f85-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.614640 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/565450d6-6be1-41fd-973f-6f69adc10f85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.614653 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vhmn\" (UniqueName: \"kubernetes.io/projected/565450d6-6be1-41fd-973f-6f69adc10f85-kube-api-access-9vhmn\") on node \"crc\" DevicePath \"\"" Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.614666 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/565450d6-6be1-41fd-973f-6f69adc10f85-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "565450d6-6be1-41fd-973f-6f69adc10f85" (UID: "565450d6-6be1-41fd-973f-6f69adc10f85"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.716287 4680 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/565450d6-6be1-41fd-973f-6f69adc10f85-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.824656 4680 generic.go:334] "Generic (PLEG): container finished" podID="565450d6-6be1-41fd-973f-6f69adc10f85" containerID="a04822978b0390fa120576dce8ca45cd9239d856d0b2cb643e07a753a29d6c16" exitCode=0 Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.824729 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"565450d6-6be1-41fd-973f-6f69adc10f85","Type":"ContainerDied","Data":"a04822978b0390fa120576dce8ca45cd9239d856d0b2cb643e07a753a29d6c16"} Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.824763 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"565450d6-6be1-41fd-973f-6f69adc10f85","Type":"ContainerDied","Data":"e0a352ee5a20a90ed149c63ac9ab832b7c1cd1ab23ad83cf228b0a80575677b8"} Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.824785 4680 scope.go:117] "RemoveContainer" containerID="a04822978b0390fa120576dce8ca45cd9239d856d0b2cb643e07a753a29d6c16" Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.824798 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.837096 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"884f4451-a521-4e0f-9fd8-5f4e9c094fb8","Type":"ContainerStarted","Data":"f6c8884a342c2fc3492c1b3379ef082f8170d13bf83bb96da741e4cf6164a611"} Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.837147 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"884f4451-a521-4e0f-9fd8-5f4e9c094fb8","Type":"ContainerStarted","Data":"9f7fe00400e2731a8870adda96ea27d3843302cf25383c7db5adf00d979a9f68"} Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.837161 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"884f4451-a521-4e0f-9fd8-5f4e9c094fb8","Type":"ContainerStarted","Data":"08bcf39938fc797123f4040fbb26410cdd88e203746f60797a768d69699dc5bc"} Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.876047 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"725cb326-b867-449a-baca-95ca7093c415","Type":"ContainerDied","Data":"8a8addec9ffeba819d71ca9b932bfaf41384b7c04aa1bde13cce4ec3b8f73acb"} Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.876150 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.881828 4680 scope.go:117] "RemoveContainer" containerID="fc5d9438da2e6637250eef701c5e7b0d364bf71153f5862571629fb59db756f2" Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.906768 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.906747433 podStartE2EDuration="1.906747433s" podCreationTimestamp="2026-02-17 18:06:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:06:15.872690134 +0000 UTC m=+1305.423588813" watchObservedRunningTime="2026-02-17 18:06:15.906747433 +0000 UTC m=+1305.457646112" Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.908659 4680 scope.go:117] "RemoveContainer" containerID="a04822978b0390fa120576dce8ca45cd9239d856d0b2cb643e07a753a29d6c16" Feb 17 18:06:15 crc kubenswrapper[4680]: E0217 18:06:15.909838 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a04822978b0390fa120576dce8ca45cd9239d856d0b2cb643e07a753a29d6c16\": container with ID starting with a04822978b0390fa120576dce8ca45cd9239d856d0b2cb643e07a753a29d6c16 not found: ID does not exist" containerID="a04822978b0390fa120576dce8ca45cd9239d856d0b2cb643e07a753a29d6c16" Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.909869 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a04822978b0390fa120576dce8ca45cd9239d856d0b2cb643e07a753a29d6c16"} err="failed to get container status \"a04822978b0390fa120576dce8ca45cd9239d856d0b2cb643e07a753a29d6c16\": rpc error: code = NotFound desc = could not find container \"a04822978b0390fa120576dce8ca45cd9239d856d0b2cb643e07a753a29d6c16\": container with ID starting with a04822978b0390fa120576dce8ca45cd9239d856d0b2cb643e07a753a29d6c16 not found: ID does not exist" Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.910054 4680 scope.go:117] "RemoveContainer" containerID="fc5d9438da2e6637250eef701c5e7b0d364bf71153f5862571629fb59db756f2" Feb 17 18:06:15 crc kubenswrapper[4680]: E0217 18:06:15.910551 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc5d9438da2e6637250eef701c5e7b0d364bf71153f5862571629fb59db756f2\": container with ID starting with fc5d9438da2e6637250eef701c5e7b0d364bf71153f5862571629fb59db756f2 not found: ID does not exist" containerID="fc5d9438da2e6637250eef701c5e7b0d364bf71153f5862571629fb59db756f2" Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.910597 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc5d9438da2e6637250eef701c5e7b0d364bf71153f5862571629fb59db756f2"} err="failed to get container status \"fc5d9438da2e6637250eef701c5e7b0d364bf71153f5862571629fb59db756f2\": rpc error: code = NotFound desc = could not find container \"fc5d9438da2e6637250eef701c5e7b0d364bf71153f5862571629fb59db756f2\": container with ID starting with fc5d9438da2e6637250eef701c5e7b0d364bf71153f5862571629fb59db756f2 not found: ID does not exist" Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.910625 4680 scope.go:117] "RemoveContainer" containerID="a21761f29ca820d913bfa7243f7ac5c6e7117b43a163254cfec6d96484061bd1" Feb 17 18:06:15 crc kubenswrapper[4680]: I0217 18:06:15.951467 4680 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-metadata-0"] Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.007262 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.042112 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.051183 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.060855 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 18:06:16 crc kubenswrapper[4680]: E0217 18:06:16.061271 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="565450d6-6be1-41fd-973f-6f69adc10f85" containerName="nova-metadata-log" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.061299 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="565450d6-6be1-41fd-973f-6f69adc10f85" containerName="nova-metadata-log" Feb 17 18:06:16 crc kubenswrapper[4680]: E0217 18:06:16.061355 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="565450d6-6be1-41fd-973f-6f69adc10f85" containerName="nova-metadata-metadata" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.061364 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="565450d6-6be1-41fd-973f-6f69adc10f85" containerName="nova-metadata-metadata" Feb 17 18:06:16 crc kubenswrapper[4680]: E0217 18:06:16.061381 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="725cb326-b867-449a-baca-95ca7093c415" containerName="nova-scheduler-scheduler" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.061389 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="725cb326-b867-449a-baca-95ca7093c415" containerName="nova-scheduler-scheduler" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.061572 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="565450d6-6be1-41fd-973f-6f69adc10f85" containerName="nova-metadata-log" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.061593 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="565450d6-6be1-41fd-973f-6f69adc10f85" containerName="nova-metadata-metadata" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.061605 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="725cb326-b867-449a-baca-95ca7093c415" containerName="nova-scheduler-scheduler" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.062520 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.065346 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.066158 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.081123 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.098266 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.099529 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.101645 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.114471 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.225810 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e325136c-a30f-4f22-aecf-69319458b18d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e325136c-a30f-4f22-aecf-69319458b18d\") " pod="openstack/nova-metadata-0" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.225920 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13bf0e76-39eb-44b7-876e-76e06523f968-config-data\") pod \"nova-scheduler-0\" (UID: \"13bf0e76-39eb-44b7-876e-76e06523f968\") " pod="openstack/nova-scheduler-0" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.225949 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e325136c-a30f-4f22-aecf-69319458b18d-logs\") pod \"nova-metadata-0\" (UID: \"e325136c-a30f-4f22-aecf-69319458b18d\") " pod="openstack/nova-metadata-0" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.226016 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e325136c-a30f-4f22-aecf-69319458b18d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e325136c-a30f-4f22-aecf-69319458b18d\") " pod="openstack/nova-metadata-0" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.226088 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9c5h\" (UniqueName: \"kubernetes.io/projected/e325136c-a30f-4f22-aecf-69319458b18d-kube-api-access-l9c5h\") pod \"nova-metadata-0\" (UID: \"e325136c-a30f-4f22-aecf-69319458b18d\") " pod="openstack/nova-metadata-0" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.226107 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13bf0e76-39eb-44b7-876e-76e06523f968-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"13bf0e76-39eb-44b7-876e-76e06523f968\") " pod="openstack/nova-scheduler-0" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.226128 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qszj\" (UniqueName: \"kubernetes.io/projected/13bf0e76-39eb-44b7-876e-76e06523f968-kube-api-access-5qszj\") pod \"nova-scheduler-0\" (UID: \"13bf0e76-39eb-44b7-876e-76e06523f968\") " pod="openstack/nova-scheduler-0" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.226155 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e325136c-a30f-4f22-aecf-69319458b18d-config-data\") pod \"nova-metadata-0\" (UID: \"e325136c-a30f-4f22-aecf-69319458b18d\") " pod="openstack/nova-metadata-0" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.328441 4680 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e325136c-a30f-4f22-aecf-69319458b18d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e325136c-a30f-4f22-aecf-69319458b18d\") " pod="openstack/nova-metadata-0" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.328804 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13bf0e76-39eb-44b7-876e-76e06523f968-config-data\") pod \"nova-scheduler-0\" (UID: \"13bf0e76-39eb-44b7-876e-76e06523f968\") " pod="openstack/nova-scheduler-0" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.329443 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e325136c-a30f-4f22-aecf-69319458b18d-logs\") pod \"nova-metadata-0\" (UID: \"e325136c-a30f-4f22-aecf-69319458b18d\") " pod="openstack/nova-metadata-0" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.329675 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e325136c-a30f-4f22-aecf-69319458b18d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e325136c-a30f-4f22-aecf-69319458b18d\") " pod="openstack/nova-metadata-0" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.329879 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9c5h\" (UniqueName: \"kubernetes.io/projected/e325136c-a30f-4f22-aecf-69319458b18d-kube-api-access-l9c5h\") pod \"nova-metadata-0\" (UID: \"e325136c-a30f-4f22-aecf-69319458b18d\") " pod="openstack/nova-metadata-0" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.329927 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e325136c-a30f-4f22-aecf-69319458b18d-logs\") pod \"nova-metadata-0\" (UID: \"e325136c-a30f-4f22-aecf-69319458b18d\") " pod="openstack/nova-metadata-0" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.329977 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13bf0e76-39eb-44b7-876e-76e06523f968-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"13bf0e76-39eb-44b7-876e-76e06523f968\") " pod="openstack/nova-scheduler-0" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.330082 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qszj\" (UniqueName: \"kubernetes.io/projected/13bf0e76-39eb-44b7-876e-76e06523f968-kube-api-access-5qszj\") pod \"nova-scheduler-0\" (UID: \"13bf0e76-39eb-44b7-876e-76e06523f968\") " pod="openstack/nova-scheduler-0" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.330178 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e325136c-a30f-4f22-aecf-69319458b18d-config-data\") pod \"nova-metadata-0\" (UID: \"e325136c-a30f-4f22-aecf-69319458b18d\") " pod="openstack/nova-metadata-0" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.333065 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e325136c-a30f-4f22-aecf-69319458b18d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e325136c-a30f-4f22-aecf-69319458b18d\") " pod="openstack/nova-metadata-0" Feb 17 18:06:16 crc 
kubenswrapper[4680]: I0217 18:06:16.333078 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13bf0e76-39eb-44b7-876e-76e06523f968-config-data\") pod \"nova-scheduler-0\" (UID: \"13bf0e76-39eb-44b7-876e-76e06523f968\") " pod="openstack/nova-scheduler-0" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.336425 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e325136c-a30f-4f22-aecf-69319458b18d-config-data\") pod \"nova-metadata-0\" (UID: \"e325136c-a30f-4f22-aecf-69319458b18d\") " pod="openstack/nova-metadata-0" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.337157 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e325136c-a30f-4f22-aecf-69319458b18d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e325136c-a30f-4f22-aecf-69319458b18d\") " pod="openstack/nova-metadata-0" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.337623 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13bf0e76-39eb-44b7-876e-76e06523f968-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"13bf0e76-39eb-44b7-876e-76e06523f968\") " pod="openstack/nova-scheduler-0" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.360760 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qszj\" (UniqueName: \"kubernetes.io/projected/13bf0e76-39eb-44b7-876e-76e06523f968-kube-api-access-5qszj\") pod \"nova-scheduler-0\" (UID: \"13bf0e76-39eb-44b7-876e-76e06523f968\") " pod="openstack/nova-scheduler-0" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.363285 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9c5h\" (UniqueName: \"kubernetes.io/projected/e325136c-a30f-4f22-aecf-69319458b18d-kube-api-access-l9c5h\") pod \"nova-metadata-0\" (UID: \"e325136c-a30f-4f22-aecf-69319458b18d\") " pod="openstack/nova-metadata-0" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.377482 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.423063 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 18:06:16 crc kubenswrapper[4680]: I0217 18:06:16.906124 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 18:06:16 crc kubenswrapper[4680]: W0217 18:06:16.912486 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode325136c_a30f_4f22_aecf_69319458b18d.slice/crio-0852ab391cca66f461df13714f2dfd559650cf7ba374850ca546a60836b20943 WatchSource:0}: Error finding container 0852ab391cca66f461df13714f2dfd559650cf7ba374850ca546a60836b20943: Status 404 returned error can't find the container with id 0852ab391cca66f461df13714f2dfd559650cf7ba374850ca546a60836b20943 Feb 17 18:06:17 crc kubenswrapper[4680]: I0217 18:06:17.011041 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 18:06:17 crc kubenswrapper[4680]: I0217 18:06:17.116547 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="565450d6-6be1-41fd-973f-6f69adc10f85" path="/var/lib/kubelet/pods/565450d6-6be1-41fd-973f-6f69adc10f85/volumes" Feb 17 18:06:17 crc kubenswrapper[4680]: I0217 18:06:17.117170 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="725cb326-b867-449a-baca-95ca7093c415" path="/var/lib/kubelet/pods/725cb326-b867-449a-baca-95ca7093c415/volumes" Feb 17 18:06:17 crc kubenswrapper[4680]: I0217 18:06:17.897490 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"13bf0e76-39eb-44b7-876e-76e06523f968","Type":"ContainerStarted","Data":"753787862b702567cff34f507ac38c563a204d5760808df2c1f4f19754acad01"} Feb 17 18:06:17 crc kubenswrapper[4680]: I0217 18:06:17.897795 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"13bf0e76-39eb-44b7-876e-76e06523f968","Type":"ContainerStarted","Data":"647aee25df3807c95c182e5e57c9ef706197e4df311a64798aac9c95c80dda9f"} Feb 17 18:06:17 crc kubenswrapper[4680]: I0217 18:06:17.899277 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e325136c-a30f-4f22-aecf-69319458b18d","Type":"ContainerStarted","Data":"05c570b7b4e1118c3f503eab8d912dc3cba3d01faf8224e8f637bd0e7a87ff61"} Feb 17 18:06:17 crc kubenswrapper[4680]: I0217 18:06:17.899333 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e325136c-a30f-4f22-aecf-69319458b18d","Type":"ContainerStarted","Data":"b01d9224afe54773a20f8cc56822b31aa63b8d1d603fd70b2b68d9dd366092d0"} Feb 17 18:06:17 crc kubenswrapper[4680]: I0217 18:06:17.899349 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e325136c-a30f-4f22-aecf-69319458b18d","Type":"ContainerStarted","Data":"0852ab391cca66f461df13714f2dfd559650cf7ba374850ca546a60836b20943"} Feb 17 18:06:17 crc kubenswrapper[4680]: I0217 18:06:17.953665 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.9536457069999997 podStartE2EDuration="2.953645707s" podCreationTimestamp="2026-02-17 18:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:06:17.951943914 +0000 UTC m=+1307.502842613" watchObservedRunningTime="2026-02-17 18:06:17.953645707 +0000 UTC m=+1307.504544386" Feb 17 18:06:17 crc kubenswrapper[4680]: I0217 18:06:17.958578 4680 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.958555111 podStartE2EDuration="2.958555111s" podCreationTimestamp="2026-02-17 18:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:06:17.927877577 +0000 UTC m=+1307.478776266" watchObservedRunningTime="2026-02-17 18:06:17.958555111 +0000 UTC m=+1307.509453800" Feb 17 18:06:21 crc kubenswrapper[4680]: I0217 18:06:21.378641 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 18:06:21 crc kubenswrapper[4680]: I0217 18:06:21.379994 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 18:06:21 crc kubenswrapper[4680]: I0217 18:06:21.424847 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 17 18:06:24 crc kubenswrapper[4680]: I0217 18:06:24.477179 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 18:06:24 crc kubenswrapper[4680]: I0217 18:06:24.477459 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 18:06:25 crc kubenswrapper[4680]: I0217 18:06:25.493656 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="884f4451-a521-4e0f-9fd8-5f4e9c094fb8" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.199:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 18:06:25 crc kubenswrapper[4680]: I0217 18:06:25.496812 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="884f4451-a521-4e0f-9fd8-5f4e9c094fb8" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.199:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 18:06:26 crc kubenswrapper[4680]: I0217 18:06:26.378831 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 18:06:26 crc kubenswrapper[4680]: I0217 18:06:26.379196 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 18:06:26 crc kubenswrapper[4680]: I0217 18:06:26.424424 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 17 18:06:26 crc kubenswrapper[4680]: I0217 18:06:26.457504 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 17 18:06:27 crc kubenswrapper[4680]: I0217 18:06:27.002062 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 17 18:06:27 crc kubenswrapper[4680]: I0217 18:06:27.394512 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e325136c-a30f-4f22-aecf-69319458b18d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 18:06:27 crc kubenswrapper[4680]: I0217 18:06:27.394512 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e325136c-a30f-4f22-aecf-69319458b18d" containerName="nova-metadata-log" probeResult="failure" output="Get 
\"https://10.217.0.200:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 18:06:34 crc kubenswrapper[4680]: I0217 18:06:34.487801 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 18:06:34 crc kubenswrapper[4680]: I0217 18:06:34.489466 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 18:06:34 crc kubenswrapper[4680]: I0217 18:06:34.491394 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 18:06:34 crc kubenswrapper[4680]: I0217 18:06:34.498787 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 18:06:35 crc kubenswrapper[4680]: I0217 18:06:35.032016 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 18:06:35 crc kubenswrapper[4680]: I0217 18:06:35.039032 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 18:06:35 crc kubenswrapper[4680]: I0217 18:06:35.589213 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:06:35 crc kubenswrapper[4680]: I0217 18:06:35.589302 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:06:35 crc kubenswrapper[4680]: I0217 18:06:35.589367 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" Feb 17 18:06:35 crc kubenswrapper[4680]: I0217 18:06:35.590168 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ad1d4ed6f92a7e4627e28b3747e3d87e3433b7897d128c386001f1954f052c13"} pod="openshift-machine-config-operator/machine-config-daemon-rfw56" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 18:06:35 crc kubenswrapper[4680]: I0217 18:06:35.590220 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" containerID="cri-o://ad1d4ed6f92a7e4627e28b3747e3d87e3433b7897d128c386001f1954f052c13" gracePeriod=600 Feb 17 18:06:36 crc kubenswrapper[4680]: I0217 18:06:36.041729 4680 generic.go:334] "Generic (PLEG): container finished" podID="8dd08058-016e-4607-8be6-40463674c2fe" containerID="ad1d4ed6f92a7e4627e28b3747e3d87e3433b7897d128c386001f1954f052c13" exitCode=0 Feb 17 18:06:36 crc kubenswrapper[4680]: I0217 18:06:36.041796 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerDied","Data":"ad1d4ed6f92a7e4627e28b3747e3d87e3433b7897d128c386001f1954f052c13"} Feb 17 18:06:36 crc kubenswrapper[4680]: I0217 18:06:36.042336 4680 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerStarted","Data":"3d8de076bce422cc1e91fb92ebdaf2197336fde15a42ede9bcc224d5d09c8ff1"} Feb 17 18:06:36 crc kubenswrapper[4680]: I0217 18:06:36.042427 4680 scope.go:117] "RemoveContainer" containerID="7648edfdb05be717c37ab1637f452b4f2bc8bd71444b3bbf2ede9558e7be04df" Feb 17 18:06:36 crc kubenswrapper[4680]: I0217 18:06:36.383799 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 18:06:36 crc kubenswrapper[4680]: I0217 18:06:36.383869 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 18:06:36 crc kubenswrapper[4680]: I0217 18:06:36.389481 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 18:06:36 crc kubenswrapper[4680]: I0217 18:06:36.391953 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 18:06:45 crc kubenswrapper[4680]: I0217 18:06:45.258244 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 18:06:46 crc kubenswrapper[4680]: I0217 18:06:46.075782 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 18:06:55 crc kubenswrapper[4680]: I0217 18:06:55.781426 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="8d9f1e89-7561-458e-9189-73106b757079" containerName="rabbitmq" containerID="cri-o://de7d8587516a3e472941c02d37a182987c7c0ee3823ff9f91bb7fe57d292b479" gracePeriod=51 Feb 17 18:06:55 crc kubenswrapper[4680]: I0217 18:06:55.914426 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="81eeaf28-d5ac-44e3-bad2-215fc89f4895" containerName="rabbitmq" containerID="cri-o://4b1591e94fbc9b31e89aaf9fe2f4c5c4b820531ab45c09e9bfa78284c4de6ac0" gracePeriod=50 Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.250729 4680 generic.go:334] "Generic (PLEG): container finished" podID="8d9f1e89-7561-458e-9189-73106b757079" containerID="de7d8587516a3e472941c02d37a182987c7c0ee3823ff9f91bb7fe57d292b479" exitCode=0 Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.250804 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8d9f1e89-7561-458e-9189-73106b757079","Type":"ContainerDied","Data":"de7d8587516a3e472941c02d37a182987c7c0ee3823ff9f91bb7fe57d292b479"} Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.386924 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.523698 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d9f1e89-7561-458e-9189-73106b757079-pod-info\") pod \"8d9f1e89-7561-458e-9189-73106b757079\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.523768 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d9f1e89-7561-458e-9189-73106b757079-erlang-cookie-secret\") pod \"8d9f1e89-7561-458e-9189-73106b757079\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.523878 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d9f1e89-7561-458e-9189-73106b757079-config-data\") pod \"8d9f1e89-7561-458e-9189-73106b757079\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.524628 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d9f1e89-7561-458e-9189-73106b757079-rabbitmq-tls\") pod \"8d9f1e89-7561-458e-9189-73106b757079\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.524682 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d9f1e89-7561-458e-9189-73106b757079-server-conf\") pod \"8d9f1e89-7561-458e-9189-73106b757079\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.524710 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d9f1e89-7561-458e-9189-73106b757079-rabbitmq-confd\") pod \"8d9f1e89-7561-458e-9189-73106b757079\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.524726 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"8d9f1e89-7561-458e-9189-73106b757079\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.524773 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d9f1e89-7561-458e-9189-73106b757079-plugins-conf\") pod \"8d9f1e89-7561-458e-9189-73106b757079\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.524803 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d9f1e89-7561-458e-9189-73106b757079-rabbitmq-plugins\") pod \"8d9f1e89-7561-458e-9189-73106b757079\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.524846 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd6nq\" (UniqueName: \"kubernetes.io/projected/8d9f1e89-7561-458e-9189-73106b757079-kube-api-access-kd6nq\") pod \"8d9f1e89-7561-458e-9189-73106b757079\" (UID: 
\"8d9f1e89-7561-458e-9189-73106b757079\") " Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.524944 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d9f1e89-7561-458e-9189-73106b757079-rabbitmq-erlang-cookie\") pod \"8d9f1e89-7561-458e-9189-73106b757079\" (UID: \"8d9f1e89-7561-458e-9189-73106b757079\") " Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.526654 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d9f1e89-7561-458e-9189-73106b757079-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "8d9f1e89-7561-458e-9189-73106b757079" (UID: "8d9f1e89-7561-458e-9189-73106b757079"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.526976 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d9f1e89-7561-458e-9189-73106b757079-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "8d9f1e89-7561-458e-9189-73106b757079" (UID: "8d9f1e89-7561-458e-9189-73106b757079"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.527168 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d9f1e89-7561-458e-9189-73106b757079-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "8d9f1e89-7561-458e-9189-73106b757079" (UID: "8d9f1e89-7561-458e-9189-73106b757079"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.531703 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d9f1e89-7561-458e-9189-73106b757079-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "8d9f1e89-7561-458e-9189-73106b757079" (UID: "8d9f1e89-7561-458e-9189-73106b757079"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.532015 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "8d9f1e89-7561-458e-9189-73106b757079" (UID: "8d9f1e89-7561-458e-9189-73106b757079"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.534976 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/8d9f1e89-7561-458e-9189-73106b757079-pod-info" (OuterVolumeSpecName: "pod-info") pod "8d9f1e89-7561-458e-9189-73106b757079" (UID: "8d9f1e89-7561-458e-9189-73106b757079"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.536496 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d9f1e89-7561-458e-9189-73106b757079-kube-api-access-kd6nq" (OuterVolumeSpecName: "kube-api-access-kd6nq") pod "8d9f1e89-7561-458e-9189-73106b757079" (UID: "8d9f1e89-7561-458e-9189-73106b757079"). InnerVolumeSpecName "kube-api-access-kd6nq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.538653 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d9f1e89-7561-458e-9189-73106b757079-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "8d9f1e89-7561-458e-9189-73106b757079" (UID: "8d9f1e89-7561-458e-9189-73106b757079"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.586280 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d9f1e89-7561-458e-9189-73106b757079-config-data" (OuterVolumeSpecName: "config-data") pod "8d9f1e89-7561-458e-9189-73106b757079" (UID: "8d9f1e89-7561-458e-9189-73106b757079"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.591551 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d9f1e89-7561-458e-9189-73106b757079-server-conf" (OuterVolumeSpecName: "server-conf") pod "8d9f1e89-7561-458e-9189-73106b757079" (UID: "8d9f1e89-7561-458e-9189-73106b757079"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.626215 4680 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d9f1e89-7561-458e-9189-73106b757079-pod-info\") on node \"crc\" DevicePath \"\"" Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.626257 4680 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d9f1e89-7561-458e-9189-73106b757079-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.626270 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d9f1e89-7561-458e-9189-73106b757079-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.626280 4680 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d9f1e89-7561-458e-9189-73106b757079-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.626291 4680 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d9f1e89-7561-458e-9189-73106b757079-server-conf\") on node \"crc\" DevicePath \"\"" Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.626335 4680 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.626349 4680 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d9f1e89-7561-458e-9189-73106b757079-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.626360 4680 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d9f1e89-7561-458e-9189-73106b757079-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.626375 4680 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kd6nq\" (UniqueName: \"kubernetes.io/projected/8d9f1e89-7561-458e-9189-73106b757079-kube-api-access-kd6nq\") on node \"crc\" DevicePath \"\"" Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.626386 4680 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d9f1e89-7561-458e-9189-73106b757079-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.653625 4680 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.679790 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d9f1e89-7561-458e-9189-73106b757079-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "8d9f1e89-7561-458e-9189-73106b757079" (UID: "8d9f1e89-7561-458e-9189-73106b757079"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.727699 4680 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 17 18:06:57 crc kubenswrapper[4680]: I0217 18:06:57.727760 4680 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d9f1e89-7561-458e-9189-73106b757079-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.091001 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-65gq7"] Feb 17 18:06:58 crc kubenswrapper[4680]: E0217 18:06:58.091432 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9f1e89-7561-458e-9189-73106b757079" containerName="rabbitmq" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.091448 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9f1e89-7561-458e-9189-73106b757079" containerName="rabbitmq" Feb 17 18:06:58 crc kubenswrapper[4680]: E0217 18:06:58.091476 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9f1e89-7561-458e-9189-73106b757079" containerName="setup-container" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.091485 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9f1e89-7561-458e-9189-73106b757079" containerName="setup-container" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.091684 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d9f1e89-7561-458e-9189-73106b757079" containerName="rabbitmq" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.092918 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.097564 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.104860 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-65gq7"] Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.251523 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-65gq7\" (UID: \"59054748-4087-4b87-85ad-ca3d680c4fa2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.251568 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn7z5\" (UniqueName: \"kubernetes.io/projected/59054748-4087-4b87-85ad-ca3d680c4fa2-kube-api-access-sn7z5\") pod \"dnsmasq-dns-79bd4cc8c9-65gq7\" (UID: \"59054748-4087-4b87-85ad-ca3d680c4fa2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.251813 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-65gq7\" (UID: \"59054748-4087-4b87-85ad-ca3d680c4fa2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.251888 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-config\") pod \"dnsmasq-dns-79bd4cc8c9-65gq7\" (UID: \"59054748-4087-4b87-85ad-ca3d680c4fa2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.252141 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-65gq7\" (UID: \"59054748-4087-4b87-85ad-ca3d680c4fa2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.252208 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-65gq7\" (UID: \"59054748-4087-4b87-85ad-ca3d680c4fa2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.252232 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-65gq7\" (UID: \"59054748-4087-4b87-85ad-ca3d680c4fa2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.269866 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"8d9f1e89-7561-458e-9189-73106b757079","Type":"ContainerDied","Data":"5ed5723d3c36f4286b9ee118d73b7290f00c2fcef9da608837a6f6252c42625d"} Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.269923 4680 scope.go:117] "RemoveContainer" containerID="de7d8587516a3e472941c02d37a182987c7c0ee3823ff9f91bb7fe57d292b479" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.269958 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.315089 4680 scope.go:117] "RemoveContainer" containerID="bbec7f40f1f0a58d80579676478398525fbc7efbaaf6ef30a0bd41d67e9482c8" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.316440 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.331078 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.355524 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-65gq7\" (UID: \"59054748-4087-4b87-85ad-ca3d680c4fa2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.355587 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-65gq7\" (UID: \"59054748-4087-4b87-85ad-ca3d680c4fa2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.355626 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-65gq7\" (UID: \"59054748-4087-4b87-85ad-ca3d680c4fa2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.355648 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn7z5\" (UniqueName: \"kubernetes.io/projected/59054748-4087-4b87-85ad-ca3d680c4fa2-kube-api-access-sn7z5\") pod \"dnsmasq-dns-79bd4cc8c9-65gq7\" (UID: \"59054748-4087-4b87-85ad-ca3d680c4fa2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.355769 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-65gq7\" (UID: \"59054748-4087-4b87-85ad-ca3d680c4fa2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.355847 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-config\") pod \"dnsmasq-dns-79bd4cc8c9-65gq7\" (UID: \"59054748-4087-4b87-85ad-ca3d680c4fa2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.355960 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-65gq7\" (UID: \"59054748-4087-4b87-85ad-ca3d680c4fa2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.356565 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-65gq7\" (UID: \"59054748-4087-4b87-85ad-ca3d680c4fa2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.356573 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-65gq7\" (UID: \"59054748-4087-4b87-85ad-ca3d680c4fa2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.358354 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-65gq7\" (UID: \"59054748-4087-4b87-85ad-ca3d680c4fa2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.358538 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-config\") pod \"dnsmasq-dns-79bd4cc8c9-65gq7\" (UID: \"59054748-4087-4b87-85ad-ca3d680c4fa2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.358920 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-65gq7\" (UID: \"59054748-4087-4b87-85ad-ca3d680c4fa2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.359119 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-65gq7\" (UID: \"59054748-4087-4b87-85ad-ca3d680c4fa2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.366532 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.368729 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.370524 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.370742 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.371543 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.371717 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-7lfbg" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.372593 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.375132 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.375535 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.380718 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn7z5\" (UniqueName: \"kubernetes.io/projected/59054748-4087-4b87-85ad-ca3d680c4fa2-kube-api-access-sn7z5\") pod \"dnsmasq-dns-79bd4cc8c9-65gq7\" (UID: \"59054748-4087-4b87-85ad-ca3d680c4fa2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.402852 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.419199 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.567567 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8ddff1fb-6281-40d0-82dd-7f395cd9d3dd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.567855 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.567902 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8ddff1fb-6281-40d0-82dd-7f395cd9d3dd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.567999 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8ddff1fb-6281-40d0-82dd-7f395cd9d3dd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.568030 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8ddff1fb-6281-40d0-82dd-7f395cd9d3dd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.568086 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpgfb\" (UniqueName: \"kubernetes.io/projected/8ddff1fb-6281-40d0-82dd-7f395cd9d3dd-kube-api-access-lpgfb\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.568145 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8ddff1fb-6281-40d0-82dd-7f395cd9d3dd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.568177 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8ddff1fb-6281-40d0-82dd-7f395cd9d3dd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.568211 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8ddff1fb-6281-40d0-82dd-7f395cd9d3dd-pod-info\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.568363 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8ddff1fb-6281-40d0-82dd-7f395cd9d3dd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.568461 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8ddff1fb-6281-40d0-82dd-7f395cd9d3dd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.670549 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpgfb\" (UniqueName: \"kubernetes.io/projected/8ddff1fb-6281-40d0-82dd-7f395cd9d3dd-kube-api-access-lpgfb\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.670635 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8ddff1fb-6281-40d0-82dd-7f395cd9d3dd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.670670 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8ddff1fb-6281-40d0-82dd-7f395cd9d3dd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.670709 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8ddff1fb-6281-40d0-82dd-7f395cd9d3dd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.670777 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8ddff1fb-6281-40d0-82dd-7f395cd9d3dd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.670848 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8ddff1fb-6281-40d0-82dd-7f395cd9d3dd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.671076 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8ddff1fb-6281-40d0-82dd-7f395cd9d3dd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.671111 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.671147 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8ddff1fb-6281-40d0-82dd-7f395cd9d3dd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.671207 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8ddff1fb-6281-40d0-82dd-7f395cd9d3dd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.671234 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8ddff1fb-6281-40d0-82dd-7f395cd9d3dd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.672259 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8ddff1fb-6281-40d0-82dd-7f395cd9d3dd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.672663 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8ddff1fb-6281-40d0-82dd-7f395cd9d3dd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.673451 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8ddff1fb-6281-40d0-82dd-7f395cd9d3dd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.674821 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8ddff1fb-6281-40d0-82dd-7f395cd9d3dd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.674943 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.675110 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8ddff1fb-6281-40d0-82dd-7f395cd9d3dd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.678847 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8ddff1fb-6281-40d0-82dd-7f395cd9d3dd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.679934 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8ddff1fb-6281-40d0-82dd-7f395cd9d3dd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.680069 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8ddff1fb-6281-40d0-82dd-7f395cd9d3dd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.680781 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8ddff1fb-6281-40d0-82dd-7f395cd9d3dd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.698669 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpgfb\" (UniqueName: \"kubernetes.io/projected/8ddff1fb-6281-40d0-82dd-7f395cd9d3dd-kube-api-access-lpgfb\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.709376 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.855983 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:06:58 crc kubenswrapper[4680]: I0217 18:06:58.971939 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-65gq7"] Feb 17 18:06:59 crc kubenswrapper[4680]: I0217 18:06:59.132642 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d9f1e89-7561-458e-9189-73106b757079" path="/var/lib/kubelet/pods/8d9f1e89-7561-458e-9189-73106b757079/volumes" Feb 17 18:06:59 crc kubenswrapper[4680]: I0217 18:06:59.290501 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" event={"ID":"59054748-4087-4b87-85ad-ca3d680c4fa2","Type":"ContainerStarted","Data":"05f33a730c82ba2aadbb0b26d9a179e0c9f169ab11e197e64b13ee3fff187dcf"} Feb 17 18:06:59 crc kubenswrapper[4680]: I0217 18:06:59.481812 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 18:06:59 crc kubenswrapper[4680]: W0217 18:06:59.510596 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ddff1fb_6281_40d0_82dd_7f395cd9d3dd.slice/crio-fa957078aeb3496c3abd3b48d180a0dd78d49a6dc3a820a8620704af9f54d7a9 WatchSource:0}: Error finding container fa957078aeb3496c3abd3b48d180a0dd78d49a6dc3a820a8620704af9f54d7a9: Status 404 returned error can't find the container with id fa957078aeb3496c3abd3b48d180a0dd78d49a6dc3a820a8620704af9f54d7a9 Feb 17 18:07:00 crc kubenswrapper[4680]: I0217 18:07:00.300811 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd","Type":"ContainerStarted","Data":"fa957078aeb3496c3abd3b48d180a0dd78d49a6dc3a820a8620704af9f54d7a9"} Feb 17 18:07:00 crc kubenswrapper[4680]: I0217 18:07:00.303211 4680 generic.go:334] "Generic (PLEG): container finished" podID="59054748-4087-4b87-85ad-ca3d680c4fa2" containerID="e4b9ce719834371df46b902b4450b26c5d26df2365c01637f2d138fbbfe0ac3d" exitCode=0 Feb 17 18:07:00 crc kubenswrapper[4680]: I0217 18:07:00.303273 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" event={"ID":"59054748-4087-4b87-85ad-ca3d680c4fa2","Type":"ContainerDied","Data":"e4b9ce719834371df46b902b4450b26c5d26df2365c01637f2d138fbbfe0ac3d"} Feb 17 18:07:00 crc kubenswrapper[4680]: I0217 18:07:00.820106 4680 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="81eeaf28-d5ac-44e3-bad2-215fc89f4895" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Feb 17 18:07:01 crc kubenswrapper[4680]: I0217 18:07:01.312945 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd","Type":"ContainerStarted","Data":"60138c37b1b00e416ce07dae2d6cb2c31128f6b9d7d7244b0e6f27a5c3b37fac"} Feb 17 18:07:01 crc kubenswrapper[4680]: I0217 18:07:01.317395 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" event={"ID":"59054748-4087-4b87-85ad-ca3d680c4fa2","Type":"ContainerStarted","Data":"91facdc44387a65dcf29e667a062812504fbfdafa8c48e06e491ef94d63b5540"} Feb 17 18:07:01 crc kubenswrapper[4680]: I0217 18:07:01.317597 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" Feb 17 18:07:01 crc kubenswrapper[4680]: I0217 18:07:01.378842 4680 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" podStartSLOduration=3.378820673 podStartE2EDuration="3.378820673s" podCreationTimestamp="2026-02-17 18:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:07:01.376020155 +0000 UTC m=+1350.926918854" watchObservedRunningTime="2026-02-17 18:07:01.378820673 +0000 UTC m=+1350.929719372" Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.337803 4680 generic.go:334] "Generic (PLEG): container finished" podID="81eeaf28-d5ac-44e3-bad2-215fc89f4895" containerID="4b1591e94fbc9b31e89aaf9fe2f4c5c4b820531ab45c09e9bfa78284c4de6ac0" exitCode=0 Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.338522 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"81eeaf28-d5ac-44e3-bad2-215fc89f4895","Type":"ContainerDied","Data":"4b1591e94fbc9b31e89aaf9fe2f4c5c4b820531ab45c09e9bfa78284c4de6ac0"} Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.491708 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.575735 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjcvg\" (UniqueName: \"kubernetes.io/projected/81eeaf28-d5ac-44e3-bad2-215fc89f4895-kube-api-access-vjcvg\") pod \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.575857 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/81eeaf28-d5ac-44e3-bad2-215fc89f4895-pod-info\") pod \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.575916 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/81eeaf28-d5ac-44e3-bad2-215fc89f4895-plugins-conf\") pod \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.575952 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/81eeaf28-d5ac-44e3-bad2-215fc89f4895-rabbitmq-plugins\") pod \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.576037 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/81eeaf28-d5ac-44e3-bad2-215fc89f4895-rabbitmq-confd\") pod \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.576092 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81eeaf28-d5ac-44e3-bad2-215fc89f4895-config-data\") pod \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.576139 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/81eeaf28-d5ac-44e3-bad2-215fc89f4895-rabbitmq-tls\") pod \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.576195 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/81eeaf28-d5ac-44e3-bad2-215fc89f4895-server-conf\") pod \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.576228 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/81eeaf28-d5ac-44e3-bad2-215fc89f4895-rabbitmq-erlang-cookie\") pod \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.576268 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/81eeaf28-d5ac-44e3-bad2-215fc89f4895-erlang-cookie-secret\") pod \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.576292 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\" (UID: \"81eeaf28-d5ac-44e3-bad2-215fc89f4895\") " Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.576566 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81eeaf28-d5ac-44e3-bad2-215fc89f4895-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "81eeaf28-d5ac-44e3-bad2-215fc89f4895" (UID: "81eeaf28-d5ac-44e3-bad2-215fc89f4895"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.576840 4680 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/81eeaf28-d5ac-44e3-bad2-215fc89f4895-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.577624 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81eeaf28-d5ac-44e3-bad2-215fc89f4895-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "81eeaf28-d5ac-44e3-bad2-215fc89f4895" (UID: "81eeaf28-d5ac-44e3-bad2-215fc89f4895"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.579197 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81eeaf28-d5ac-44e3-bad2-215fc89f4895-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "81eeaf28-d5ac-44e3-bad2-215fc89f4895" (UID: "81eeaf28-d5ac-44e3-bad2-215fc89f4895"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.592633 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81eeaf28-d5ac-44e3-bad2-215fc89f4895-kube-api-access-vjcvg" (OuterVolumeSpecName: "kube-api-access-vjcvg") pod "81eeaf28-d5ac-44e3-bad2-215fc89f4895" (UID: "81eeaf28-d5ac-44e3-bad2-215fc89f4895"). InnerVolumeSpecName "kube-api-access-vjcvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.592740 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81eeaf28-d5ac-44e3-bad2-215fc89f4895-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "81eeaf28-d5ac-44e3-bad2-215fc89f4895" (UID: "81eeaf28-d5ac-44e3-bad2-215fc89f4895"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.599279 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/81eeaf28-d5ac-44e3-bad2-215fc89f4895-pod-info" (OuterVolumeSpecName: "pod-info") pod "81eeaf28-d5ac-44e3-bad2-215fc89f4895" (UID: "81eeaf28-d5ac-44e3-bad2-215fc89f4895"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.600637 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "81eeaf28-d5ac-44e3-bad2-215fc89f4895" (UID: "81eeaf28-d5ac-44e3-bad2-215fc89f4895"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.610097 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81eeaf28-d5ac-44e3-bad2-215fc89f4895-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "81eeaf28-d5ac-44e3-bad2-215fc89f4895" (UID: "81eeaf28-d5ac-44e3-bad2-215fc89f4895"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.639880 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81eeaf28-d5ac-44e3-bad2-215fc89f4895-config-data" (OuterVolumeSpecName: "config-data") pod "81eeaf28-d5ac-44e3-bad2-215fc89f4895" (UID: "81eeaf28-d5ac-44e3-bad2-215fc89f4895"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.683306 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81eeaf28-d5ac-44e3-bad2-215fc89f4895-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.683366 4680 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/81eeaf28-d5ac-44e3-bad2-215fc89f4895-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.683380 4680 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/81eeaf28-d5ac-44e3-bad2-215fc89f4895-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.683392 4680 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/81eeaf28-d5ac-44e3-bad2-215fc89f4895-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.683423 4680 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.683432 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjcvg\" (UniqueName: \"kubernetes.io/projected/81eeaf28-d5ac-44e3-bad2-215fc89f4895-kube-api-access-vjcvg\") on node \"crc\" DevicePath \"\"" Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.683441 4680 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/81eeaf28-d5ac-44e3-bad2-215fc89f4895-pod-info\") on node \"crc\" DevicePath \"\"" Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.683450 4680 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/81eeaf28-d5ac-44e3-bad2-215fc89f4895-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.706946 4680 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.785935 4680 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.793016 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81eeaf28-d5ac-44e3-bad2-215fc89f4895-server-conf" (OuterVolumeSpecName: "server-conf") pod "81eeaf28-d5ac-44e3-bad2-215fc89f4895" (UID: "81eeaf28-d5ac-44e3-bad2-215fc89f4895"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.836553 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81eeaf28-d5ac-44e3-bad2-215fc89f4895-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "81eeaf28-d5ac-44e3-bad2-215fc89f4895" (UID: "81eeaf28-d5ac-44e3-bad2-215fc89f4895"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.887829 4680 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/81eeaf28-d5ac-44e3-bad2-215fc89f4895-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 17 18:07:02 crc kubenswrapper[4680]: I0217 18:07:02.887866 4680 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/81eeaf28-d5ac-44e3-bad2-215fc89f4895-server-conf\") on node \"crc\" DevicePath \"\"" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.351673 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"81eeaf28-d5ac-44e3-bad2-215fc89f4895","Type":"ContainerDied","Data":"f0359a65c84df44330a51a7f1c8d1ac418f4b5e09a142d8a17e25a4856b26268"} Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.351733 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.351766 4680 scope.go:117] "RemoveContainer" containerID="4b1591e94fbc9b31e89aaf9fe2f4c5c4b820531ab45c09e9bfa78284c4de6ac0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.379759 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.384845 4680 scope.go:117] "RemoveContainer" containerID="a92f4b4af5c0b60983ad5e0d0a6a5a254d4b153918b5ee06eb42a9e9354726b2" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.394786 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.424463 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 18:07:03 crc kubenswrapper[4680]: E0217 18:07:03.425079 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81eeaf28-d5ac-44e3-bad2-215fc89f4895" containerName="rabbitmq" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.425102 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="81eeaf28-d5ac-44e3-bad2-215fc89f4895" containerName="rabbitmq" Feb 17 18:07:03 crc kubenswrapper[4680]: E0217 18:07:03.425135 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81eeaf28-d5ac-44e3-bad2-215fc89f4895" containerName="setup-container" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.425143 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="81eeaf28-d5ac-44e3-bad2-215fc89f4895" containerName="setup-container" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.425375 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="81eeaf28-d5ac-44e3-bad2-215fc89f4895" containerName="rabbitmq" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.426657 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.429846 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.430549 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.430764 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.430917 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.430944 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.431573 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-lz7x5" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.433149 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.450628 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.499546 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f5b94b27-be97-48aa-a421-49c29363f388-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.499866 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrc54\" (UniqueName: \"kubernetes.io/projected/f5b94b27-be97-48aa-a421-49c29363f388-kube-api-access-zrc54\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.500003 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f5b94b27-be97-48aa-a421-49c29363f388-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.500144 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f5b94b27-be97-48aa-a421-49c29363f388-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.500287 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f5b94b27-be97-48aa-a421-49c29363f388-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.500472 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/f5b94b27-be97-48aa-a421-49c29363f388-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.500615 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f5b94b27-be97-48aa-a421-49c29363f388-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.500735 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.500840 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f5b94b27-be97-48aa-a421-49c29363f388-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.500948 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f5b94b27-be97-48aa-a421-49c29363f388-config-data\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.501125 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f5b94b27-be97-48aa-a421-49c29363f388-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.603431 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f5b94b27-be97-48aa-a421-49c29363f388-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.603522 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f5b94b27-be97-48aa-a421-49c29363f388-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.603574 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f5b94b27-be97-48aa-a421-49c29363f388-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.603611 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " 
pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.603638 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f5b94b27-be97-48aa-a421-49c29363f388-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.603663 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f5b94b27-be97-48aa-a421-49c29363f388-config-data\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.603732 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f5b94b27-be97-48aa-a421-49c29363f388-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.603769 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f5b94b27-be97-48aa-a421-49c29363f388-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.603808 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrc54\" (UniqueName: \"kubernetes.io/projected/f5b94b27-be97-48aa-a421-49c29363f388-kube-api-access-zrc54\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.603857 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f5b94b27-be97-48aa-a421-49c29363f388-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.603910 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f5b94b27-be97-48aa-a421-49c29363f388-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.604443 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f5b94b27-be97-48aa-a421-49c29363f388-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.605245 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.605378 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/f5b94b27-be97-48aa-a421-49c29363f388-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.605727 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f5b94b27-be97-48aa-a421-49c29363f388-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.605907 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f5b94b27-be97-48aa-a421-49c29363f388-config-data\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.606460 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f5b94b27-be97-48aa-a421-49c29363f388-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.611555 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f5b94b27-be97-48aa-a421-49c29363f388-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.611779 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f5b94b27-be97-48aa-a421-49c29363f388-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.612574 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f5b94b27-be97-48aa-a421-49c29363f388-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.623263 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f5b94b27-be97-48aa-a421-49c29363f388-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.625345 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrc54\" (UniqueName: \"kubernetes.io/projected/f5b94b27-be97-48aa-a421-49c29363f388-kube-api-access-zrc54\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.656819 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"f5b94b27-be97-48aa-a421-49c29363f388\") " pod="openstack/rabbitmq-server-0" Feb 17 18:07:03 crc kubenswrapper[4680]: I0217 18:07:03.767359 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 18:07:04 crc kubenswrapper[4680]: I0217 18:07:04.306028 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 18:07:04 crc kubenswrapper[4680]: I0217 18:07:04.364607 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f5b94b27-be97-48aa-a421-49c29363f388","Type":"ContainerStarted","Data":"2811e4c44fb53aade15d2f44749368db7f75f8caef6c3de92657de9428776841"} Feb 17 18:07:05 crc kubenswrapper[4680]: I0217 18:07:05.119005 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81eeaf28-d5ac-44e3-bad2-215fc89f4895" path="/var/lib/kubelet/pods/81eeaf28-d5ac-44e3-bad2-215fc89f4895/volumes" Feb 17 18:07:06 crc kubenswrapper[4680]: I0217 18:07:06.389119 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f5b94b27-be97-48aa-a421-49c29363f388","Type":"ContainerStarted","Data":"ddb3f1d2c213b33ea98802a6e2b47725d837538384372572792f80bb85b04868"} Feb 17 18:07:08 crc kubenswrapper[4680]: I0217 18:07:08.423364 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" Feb 17 18:07:08 crc kubenswrapper[4680]: I0217 18:07:08.499148 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-m4hqb"] Feb 17 18:07:08 crc kubenswrapper[4680]: I0217 18:07:08.499432 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-m4hqb" podUID="250ac6e5-9c00-4f78-8833-9ffbd8e24549" containerName="dnsmasq-dns" containerID="cri-o://92ee09ea902edffb5451e9f5f4819c500d3a8c6500b456ab1fbfa7f7c3b0ac95" gracePeriod=10 Feb 17 18:07:08 crc kubenswrapper[4680]: I0217 18:07:08.725296 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ff66b85ff-hwrrl"] Feb 17 18:07:08 crc kubenswrapper[4680]: I0217 18:07:08.726845 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ff66b85ff-hwrrl" Feb 17 18:07:08 crc kubenswrapper[4680]: I0217 18:07:08.738252 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ff66b85ff-hwrrl"] Feb 17 18:07:08 crc kubenswrapper[4680]: I0217 18:07:08.812640 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/44cbaec4-7f81-44c8-80f7-02818330e08f-openstack-edpm-ipam\") pod \"dnsmasq-dns-6ff66b85ff-hwrrl\" (UID: \"44cbaec4-7f81-44c8-80f7-02818330e08f\") " pod="openstack/dnsmasq-dns-6ff66b85ff-hwrrl" Feb 17 18:07:08 crc kubenswrapper[4680]: I0217 18:07:08.812735 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44cbaec4-7f81-44c8-80f7-02818330e08f-dns-svc\") pod \"dnsmasq-dns-6ff66b85ff-hwrrl\" (UID: \"44cbaec4-7f81-44c8-80f7-02818330e08f\") " pod="openstack/dnsmasq-dns-6ff66b85ff-hwrrl" Feb 17 18:07:08 crc kubenswrapper[4680]: I0217 18:07:08.812770 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncczf\" (UniqueName: \"kubernetes.io/projected/44cbaec4-7f81-44c8-80f7-02818330e08f-kube-api-access-ncczf\") pod \"dnsmasq-dns-6ff66b85ff-hwrrl\" (UID: \"44cbaec4-7f81-44c8-80f7-02818330e08f\") " pod="openstack/dnsmasq-dns-6ff66b85ff-hwrrl" Feb 17 18:07:08 crc kubenswrapper[4680]: I0217 18:07:08.812807 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44cbaec4-7f81-44c8-80f7-02818330e08f-ovsdbserver-sb\") pod \"dnsmasq-dns-6ff66b85ff-hwrrl\" (UID: \"44cbaec4-7f81-44c8-80f7-02818330e08f\") " pod="openstack/dnsmasq-dns-6ff66b85ff-hwrrl" Feb 17 18:07:08 crc kubenswrapper[4680]: I0217 18:07:08.812826 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44cbaec4-7f81-44c8-80f7-02818330e08f-ovsdbserver-nb\") pod \"dnsmasq-dns-6ff66b85ff-hwrrl\" (UID: \"44cbaec4-7f81-44c8-80f7-02818330e08f\") " pod="openstack/dnsmasq-dns-6ff66b85ff-hwrrl" Feb 17 18:07:08 crc kubenswrapper[4680]: I0217 18:07:08.812867 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44cbaec4-7f81-44c8-80f7-02818330e08f-config\") pod \"dnsmasq-dns-6ff66b85ff-hwrrl\" (UID: \"44cbaec4-7f81-44c8-80f7-02818330e08f\") " pod="openstack/dnsmasq-dns-6ff66b85ff-hwrrl" Feb 17 18:07:08 crc kubenswrapper[4680]: I0217 18:07:08.812952 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/44cbaec4-7f81-44c8-80f7-02818330e08f-dns-swift-storage-0\") pod \"dnsmasq-dns-6ff66b85ff-hwrrl\" (UID: \"44cbaec4-7f81-44c8-80f7-02818330e08f\") " pod="openstack/dnsmasq-dns-6ff66b85ff-hwrrl" Feb 17 18:07:08 crc kubenswrapper[4680]: I0217 18:07:08.915092 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44cbaec4-7f81-44c8-80f7-02818330e08f-config\") pod \"dnsmasq-dns-6ff66b85ff-hwrrl\" (UID: \"44cbaec4-7f81-44c8-80f7-02818330e08f\") " pod="openstack/dnsmasq-dns-6ff66b85ff-hwrrl" Feb 17 18:07:08 crc kubenswrapper[4680]: I0217 18:07:08.916049 4680 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44cbaec4-7f81-44c8-80f7-02818330e08f-config\") pod \"dnsmasq-dns-6ff66b85ff-hwrrl\" (UID: \"44cbaec4-7f81-44c8-80f7-02818330e08f\") " pod="openstack/dnsmasq-dns-6ff66b85ff-hwrrl" Feb 17 18:07:08 crc kubenswrapper[4680]: I0217 18:07:08.916543 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/44cbaec4-7f81-44c8-80f7-02818330e08f-dns-swift-storage-0\") pod \"dnsmasq-dns-6ff66b85ff-hwrrl\" (UID: \"44cbaec4-7f81-44c8-80f7-02818330e08f\") " pod="openstack/dnsmasq-dns-6ff66b85ff-hwrrl" Feb 17 18:07:08 crc kubenswrapper[4680]: I0217 18:07:08.916624 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/44cbaec4-7f81-44c8-80f7-02818330e08f-openstack-edpm-ipam\") pod \"dnsmasq-dns-6ff66b85ff-hwrrl\" (UID: \"44cbaec4-7f81-44c8-80f7-02818330e08f\") " pod="openstack/dnsmasq-dns-6ff66b85ff-hwrrl" Feb 17 18:07:08 crc kubenswrapper[4680]: I0217 18:07:08.916730 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44cbaec4-7f81-44c8-80f7-02818330e08f-dns-svc\") pod \"dnsmasq-dns-6ff66b85ff-hwrrl\" (UID: \"44cbaec4-7f81-44c8-80f7-02818330e08f\") " pod="openstack/dnsmasq-dns-6ff66b85ff-hwrrl" Feb 17 18:07:08 crc kubenswrapper[4680]: I0217 18:07:08.916773 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncczf\" (UniqueName: \"kubernetes.io/projected/44cbaec4-7f81-44c8-80f7-02818330e08f-kube-api-access-ncczf\") pod \"dnsmasq-dns-6ff66b85ff-hwrrl\" (UID: \"44cbaec4-7f81-44c8-80f7-02818330e08f\") " pod="openstack/dnsmasq-dns-6ff66b85ff-hwrrl" Feb 17 18:07:08 crc kubenswrapper[4680]: I0217 18:07:08.916830 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44cbaec4-7f81-44c8-80f7-02818330e08f-ovsdbserver-sb\") pod \"dnsmasq-dns-6ff66b85ff-hwrrl\" (UID: \"44cbaec4-7f81-44c8-80f7-02818330e08f\") " pod="openstack/dnsmasq-dns-6ff66b85ff-hwrrl" Feb 17 18:07:08 crc kubenswrapper[4680]: I0217 18:07:08.916857 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44cbaec4-7f81-44c8-80f7-02818330e08f-ovsdbserver-nb\") pod \"dnsmasq-dns-6ff66b85ff-hwrrl\" (UID: \"44cbaec4-7f81-44c8-80f7-02818330e08f\") " pod="openstack/dnsmasq-dns-6ff66b85ff-hwrrl" Feb 17 18:07:08 crc kubenswrapper[4680]: I0217 18:07:08.917943 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44cbaec4-7f81-44c8-80f7-02818330e08f-ovsdbserver-sb\") pod \"dnsmasq-dns-6ff66b85ff-hwrrl\" (UID: \"44cbaec4-7f81-44c8-80f7-02818330e08f\") " pod="openstack/dnsmasq-dns-6ff66b85ff-hwrrl" Feb 17 18:07:08 crc kubenswrapper[4680]: I0217 18:07:08.918494 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/44cbaec4-7f81-44c8-80f7-02818330e08f-dns-swift-storage-0\") pod \"dnsmasq-dns-6ff66b85ff-hwrrl\" (UID: \"44cbaec4-7f81-44c8-80f7-02818330e08f\") " pod="openstack/dnsmasq-dns-6ff66b85ff-hwrrl" Feb 17 18:07:08 crc kubenswrapper[4680]: I0217 18:07:08.918578 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44cbaec4-7f81-44c8-80f7-02818330e08f-dns-svc\") pod \"dnsmasq-dns-6ff66b85ff-hwrrl\" (UID: \"44cbaec4-7f81-44c8-80f7-02818330e08f\") " pod="openstack/dnsmasq-dns-6ff66b85ff-hwrrl" Feb 17 18:07:08 crc kubenswrapper[4680]: I0217 18:07:08.919187 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/44cbaec4-7f81-44c8-80f7-02818330e08f-openstack-edpm-ipam\") pod \"dnsmasq-dns-6ff66b85ff-hwrrl\" (UID: \"44cbaec4-7f81-44c8-80f7-02818330e08f\") " pod="openstack/dnsmasq-dns-6ff66b85ff-hwrrl" Feb 17 18:07:08 crc kubenswrapper[4680]: I0217 18:07:08.919808 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44cbaec4-7f81-44c8-80f7-02818330e08f-ovsdbserver-nb\") pod \"dnsmasq-dns-6ff66b85ff-hwrrl\" (UID: \"44cbaec4-7f81-44c8-80f7-02818330e08f\") " pod="openstack/dnsmasq-dns-6ff66b85ff-hwrrl" Feb 17 18:07:08 crc kubenswrapper[4680]: I0217 18:07:08.942756 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncczf\" (UniqueName: \"kubernetes.io/projected/44cbaec4-7f81-44c8-80f7-02818330e08f-kube-api-access-ncczf\") pod \"dnsmasq-dns-6ff66b85ff-hwrrl\" (UID: \"44cbaec4-7f81-44c8-80f7-02818330e08f\") " pod="openstack/dnsmasq-dns-6ff66b85ff-hwrrl" Feb 17 18:07:09 crc kubenswrapper[4680]: I0217 18:07:09.052344 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ff66b85ff-hwrrl" Feb 17 18:07:09 crc kubenswrapper[4680]: I0217 18:07:09.192392 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-m4hqb" Feb 17 18:07:09 crc kubenswrapper[4680]: I0217 18:07:09.325797 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/250ac6e5-9c00-4f78-8833-9ffbd8e24549-dns-svc\") pod \"250ac6e5-9c00-4f78-8833-9ffbd8e24549\" (UID: \"250ac6e5-9c00-4f78-8833-9ffbd8e24549\") " Feb 17 18:07:09 crc kubenswrapper[4680]: I0217 18:07:09.325836 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbqmg\" (UniqueName: \"kubernetes.io/projected/250ac6e5-9c00-4f78-8833-9ffbd8e24549-kube-api-access-kbqmg\") pod \"250ac6e5-9c00-4f78-8833-9ffbd8e24549\" (UID: \"250ac6e5-9c00-4f78-8833-9ffbd8e24549\") " Feb 17 18:07:09 crc kubenswrapper[4680]: I0217 18:07:09.325902 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/250ac6e5-9c00-4f78-8833-9ffbd8e24549-ovsdbserver-sb\") pod \"250ac6e5-9c00-4f78-8833-9ffbd8e24549\" (UID: \"250ac6e5-9c00-4f78-8833-9ffbd8e24549\") " Feb 17 18:07:09 crc kubenswrapper[4680]: I0217 18:07:09.326008 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/250ac6e5-9c00-4f78-8833-9ffbd8e24549-config\") pod \"250ac6e5-9c00-4f78-8833-9ffbd8e24549\" (UID: \"250ac6e5-9c00-4f78-8833-9ffbd8e24549\") " Feb 17 18:07:09 crc kubenswrapper[4680]: I0217 18:07:09.326059 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/250ac6e5-9c00-4f78-8833-9ffbd8e24549-ovsdbserver-nb\") pod \"250ac6e5-9c00-4f78-8833-9ffbd8e24549\" (UID: \"250ac6e5-9c00-4f78-8833-9ffbd8e24549\") " Feb 17 18:07:09 crc 
kubenswrapper[4680]: I0217 18:07:09.326103 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/250ac6e5-9c00-4f78-8833-9ffbd8e24549-dns-swift-storage-0\") pod \"250ac6e5-9c00-4f78-8833-9ffbd8e24549\" (UID: \"250ac6e5-9c00-4f78-8833-9ffbd8e24549\") " Feb 17 18:07:09 crc kubenswrapper[4680]: I0217 18:07:09.333309 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/250ac6e5-9c00-4f78-8833-9ffbd8e24549-kube-api-access-kbqmg" (OuterVolumeSpecName: "kube-api-access-kbqmg") pod "250ac6e5-9c00-4f78-8833-9ffbd8e24549" (UID: "250ac6e5-9c00-4f78-8833-9ffbd8e24549"). InnerVolumeSpecName "kube-api-access-kbqmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:07:09 crc kubenswrapper[4680]: I0217 18:07:09.441276 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbqmg\" (UniqueName: \"kubernetes.io/projected/250ac6e5-9c00-4f78-8833-9ffbd8e24549-kube-api-access-kbqmg\") on node \"crc\" DevicePath \"\"" Feb 17 18:07:09 crc kubenswrapper[4680]: I0217 18:07:09.450192 4680 generic.go:334] "Generic (PLEG): container finished" podID="250ac6e5-9c00-4f78-8833-9ffbd8e24549" containerID="92ee09ea902edffb5451e9f5f4819c500d3a8c6500b456ab1fbfa7f7c3b0ac95" exitCode=0 Feb 17 18:07:09 crc kubenswrapper[4680]: I0217 18:07:09.450250 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-m4hqb" event={"ID":"250ac6e5-9c00-4f78-8833-9ffbd8e24549","Type":"ContainerDied","Data":"92ee09ea902edffb5451e9f5f4819c500d3a8c6500b456ab1fbfa7f7c3b0ac95"} Feb 17 18:07:09 crc kubenswrapper[4680]: I0217 18:07:09.450280 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-m4hqb" event={"ID":"250ac6e5-9c00-4f78-8833-9ffbd8e24549","Type":"ContainerDied","Data":"55f3576d9a137732f361732ca337cd23e8a77665307a110cb4a568e65b42158a"} Feb 17 18:07:09 crc kubenswrapper[4680]: I0217 18:07:09.450299 4680 scope.go:117] "RemoveContainer" containerID="92ee09ea902edffb5451e9f5f4819c500d3a8c6500b456ab1fbfa7f7c3b0ac95" Feb 17 18:07:09 crc kubenswrapper[4680]: I0217 18:07:09.450426 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-m4hqb" Feb 17 18:07:09 crc kubenswrapper[4680]: I0217 18:07:09.452267 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/250ac6e5-9c00-4f78-8833-9ffbd8e24549-config" (OuterVolumeSpecName: "config") pod "250ac6e5-9c00-4f78-8833-9ffbd8e24549" (UID: "250ac6e5-9c00-4f78-8833-9ffbd8e24549"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:07:09 crc kubenswrapper[4680]: I0217 18:07:09.464445 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/250ac6e5-9c00-4f78-8833-9ffbd8e24549-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "250ac6e5-9c00-4f78-8833-9ffbd8e24549" (UID: "250ac6e5-9c00-4f78-8833-9ffbd8e24549"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:07:09 crc kubenswrapper[4680]: I0217 18:07:09.479632 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/250ac6e5-9c00-4f78-8833-9ffbd8e24549-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "250ac6e5-9c00-4f78-8833-9ffbd8e24549" (UID: "250ac6e5-9c00-4f78-8833-9ffbd8e24549"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:07:09 crc kubenswrapper[4680]: I0217 18:07:09.481454 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/250ac6e5-9c00-4f78-8833-9ffbd8e24549-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "250ac6e5-9c00-4f78-8833-9ffbd8e24549" (UID: "250ac6e5-9c00-4f78-8833-9ffbd8e24549"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:07:09 crc kubenswrapper[4680]: I0217 18:07:09.485173 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/250ac6e5-9c00-4f78-8833-9ffbd8e24549-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "250ac6e5-9c00-4f78-8833-9ffbd8e24549" (UID: "250ac6e5-9c00-4f78-8833-9ffbd8e24549"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:07:09 crc kubenswrapper[4680]: I0217 18:07:09.488395 4680 scope.go:117] "RemoveContainer" containerID="5084eda980bdcbe5faac73050e24bb80fe4b31fdf1567b40763cc17f139d3fef" Feb 17 18:07:09 crc kubenswrapper[4680]: I0217 18:07:09.515610 4680 scope.go:117] "RemoveContainer" containerID="92ee09ea902edffb5451e9f5f4819c500d3a8c6500b456ab1fbfa7f7c3b0ac95" Feb 17 18:07:09 crc kubenswrapper[4680]: E0217 18:07:09.516300 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92ee09ea902edffb5451e9f5f4819c500d3a8c6500b456ab1fbfa7f7c3b0ac95\": container with ID starting with 92ee09ea902edffb5451e9f5f4819c500d3a8c6500b456ab1fbfa7f7c3b0ac95 not found: ID does not exist" containerID="92ee09ea902edffb5451e9f5f4819c500d3a8c6500b456ab1fbfa7f7c3b0ac95" Feb 17 18:07:09 crc kubenswrapper[4680]: I0217 18:07:09.516434 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92ee09ea902edffb5451e9f5f4819c500d3a8c6500b456ab1fbfa7f7c3b0ac95"} err="failed to get container status \"92ee09ea902edffb5451e9f5f4819c500d3a8c6500b456ab1fbfa7f7c3b0ac95\": rpc error: code = NotFound desc = could not find container \"92ee09ea902edffb5451e9f5f4819c500d3a8c6500b456ab1fbfa7f7c3b0ac95\": container with ID starting with 92ee09ea902edffb5451e9f5f4819c500d3a8c6500b456ab1fbfa7f7c3b0ac95 not found: ID does not exist" Feb 17 18:07:09 crc kubenswrapper[4680]: I0217 18:07:09.516482 4680 scope.go:117] "RemoveContainer" containerID="5084eda980bdcbe5faac73050e24bb80fe4b31fdf1567b40763cc17f139d3fef" Feb 17 18:07:09 crc kubenswrapper[4680]: E0217 18:07:09.521433 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5084eda980bdcbe5faac73050e24bb80fe4b31fdf1567b40763cc17f139d3fef\": container with ID starting with 5084eda980bdcbe5faac73050e24bb80fe4b31fdf1567b40763cc17f139d3fef not found: ID does not exist" containerID="5084eda980bdcbe5faac73050e24bb80fe4b31fdf1567b40763cc17f139d3fef" Feb 17 18:07:09 crc kubenswrapper[4680]: I0217 18:07:09.521481 4680 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5084eda980bdcbe5faac73050e24bb80fe4b31fdf1567b40763cc17f139d3fef"} err="failed to get container status \"5084eda980bdcbe5faac73050e24bb80fe4b31fdf1567b40763cc17f139d3fef\": rpc error: code = NotFound desc = could not find container \"5084eda980bdcbe5faac73050e24bb80fe4b31fdf1567b40763cc17f139d3fef\": container with ID starting with 5084eda980bdcbe5faac73050e24bb80fe4b31fdf1567b40763cc17f139d3fef not found: ID does not exist" Feb 17 18:07:09 crc kubenswrapper[4680]: I0217 18:07:09.542647 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/250ac6e5-9c00-4f78-8833-9ffbd8e24549-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 18:07:09 crc kubenswrapper[4680]: I0217 18:07:09.542723 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/250ac6e5-9c00-4f78-8833-9ffbd8e24549-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 18:07:09 crc kubenswrapper[4680]: I0217 18:07:09.542736 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/250ac6e5-9c00-4f78-8833-9ffbd8e24549-config\") on node \"crc\" DevicePath \"\"" Feb 17 18:07:09 crc kubenswrapper[4680]: I0217 18:07:09.542750 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/250ac6e5-9c00-4f78-8833-9ffbd8e24549-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 18:07:09 crc kubenswrapper[4680]: I0217 18:07:09.542762 4680 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/250ac6e5-9c00-4f78-8833-9ffbd8e24549-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 18:07:09 crc kubenswrapper[4680]: I0217 18:07:09.637679 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ff66b85ff-hwrrl"] Feb 17 18:07:09 crc kubenswrapper[4680]: I0217 18:07:09.815624 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-m4hqb"] Feb 17 18:07:09 crc kubenswrapper[4680]: I0217 18:07:09.823624 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-m4hqb"] Feb 17 18:07:10 crc kubenswrapper[4680]: I0217 18:07:10.462460 4680 generic.go:334] "Generic (PLEG): container finished" podID="44cbaec4-7f81-44c8-80f7-02818330e08f" containerID="5121686a91d36684f1beba191855f658907ea1173822539d142d567a862f5c8f" exitCode=0 Feb 17 18:07:10 crc kubenswrapper[4680]: I0217 18:07:10.462497 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff66b85ff-hwrrl" event={"ID":"44cbaec4-7f81-44c8-80f7-02818330e08f","Type":"ContainerDied","Data":"5121686a91d36684f1beba191855f658907ea1173822539d142d567a862f5c8f"} Feb 17 18:07:10 crc kubenswrapper[4680]: I0217 18:07:10.462517 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff66b85ff-hwrrl" event={"ID":"44cbaec4-7f81-44c8-80f7-02818330e08f","Type":"ContainerStarted","Data":"3a44aa37ebd080ee7811c83ad31255efcb1970ccc58172711aa8958a87787f39"} Feb 17 18:07:11 crc kubenswrapper[4680]: I0217 18:07:11.115749 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="250ac6e5-9c00-4f78-8833-9ffbd8e24549" path="/var/lib/kubelet/pods/250ac6e5-9c00-4f78-8833-9ffbd8e24549/volumes" Feb 17 18:07:11 crc kubenswrapper[4680]: I0217 18:07:11.472070 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6ff66b85ff-hwrrl" event={"ID":"44cbaec4-7f81-44c8-80f7-02818330e08f","Type":"ContainerStarted","Data":"73d5dc29c5ddc7b116c1497af6f96c6a8983001d95ea928b9f8dff889022cff5"} Feb 17 18:07:11 crc kubenswrapper[4680]: I0217 18:07:11.472291 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ff66b85ff-hwrrl" Feb 17 18:07:11 crc kubenswrapper[4680]: I0217 18:07:11.491904 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6ff66b85ff-hwrrl" podStartSLOduration=3.491879867 podStartE2EDuration="3.491879867s" podCreationTimestamp="2026-02-17 18:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:07:11.489078739 +0000 UTC m=+1361.039977438" watchObservedRunningTime="2026-02-17 18:07:11.491879867 +0000 UTC m=+1361.042778546" Feb 17 18:07:19 crc kubenswrapper[4680]: I0217 18:07:19.054604 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6ff66b85ff-hwrrl" Feb 17 18:07:19 crc kubenswrapper[4680]: I0217 18:07:19.128428 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-65gq7"] Feb 17 18:07:19 crc kubenswrapper[4680]: I0217 18:07:19.129220 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" podUID="59054748-4087-4b87-85ad-ca3d680c4fa2" containerName="dnsmasq-dns" containerID="cri-o://91facdc44387a65dcf29e667a062812504fbfdafa8c48e06e491ef94d63b5540" gracePeriod=10 Feb 17 18:07:19 crc kubenswrapper[4680]: I0217 18:07:19.586076 4680 generic.go:334] "Generic (PLEG): container finished" podID="59054748-4087-4b87-85ad-ca3d680c4fa2" containerID="91facdc44387a65dcf29e667a062812504fbfdafa8c48e06e491ef94d63b5540" exitCode=0 Feb 17 18:07:19 crc kubenswrapper[4680]: I0217 18:07:19.586492 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" event={"ID":"59054748-4087-4b87-85ad-ca3d680c4fa2","Type":"ContainerDied","Data":"91facdc44387a65dcf29e667a062812504fbfdafa8c48e06e491ef94d63b5540"} Feb 17 18:07:19 crc kubenswrapper[4680]: I0217 18:07:19.586521 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" event={"ID":"59054748-4087-4b87-85ad-ca3d680c4fa2","Type":"ContainerDied","Data":"05f33a730c82ba2aadbb0b26d9a179e0c9f169ab11e197e64b13ee3fff187dcf"} Feb 17 18:07:19 crc kubenswrapper[4680]: I0217 18:07:19.586532 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05f33a730c82ba2aadbb0b26d9a179e0c9f169ab11e197e64b13ee3fff187dcf" Feb 17 18:07:19 crc kubenswrapper[4680]: I0217 18:07:19.656782 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" Feb 17 18:07:19 crc kubenswrapper[4680]: I0217 18:07:19.765795 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-dns-swift-storage-0\") pod \"59054748-4087-4b87-85ad-ca3d680c4fa2\" (UID: \"59054748-4087-4b87-85ad-ca3d680c4fa2\") " Feb 17 18:07:19 crc kubenswrapper[4680]: I0217 18:07:19.765920 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn7z5\" (UniqueName: \"kubernetes.io/projected/59054748-4087-4b87-85ad-ca3d680c4fa2-kube-api-access-sn7z5\") pod \"59054748-4087-4b87-85ad-ca3d680c4fa2\" (UID: \"59054748-4087-4b87-85ad-ca3d680c4fa2\") " Feb 17 18:07:19 crc kubenswrapper[4680]: I0217 18:07:19.766011 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-openstack-edpm-ipam\") pod \"59054748-4087-4b87-85ad-ca3d680c4fa2\" (UID: \"59054748-4087-4b87-85ad-ca3d680c4fa2\") " Feb 17 18:07:19 crc kubenswrapper[4680]: I0217 18:07:19.766235 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-ovsdbserver-sb\") pod \"59054748-4087-4b87-85ad-ca3d680c4fa2\" (UID: \"59054748-4087-4b87-85ad-ca3d680c4fa2\") " Feb 17 18:07:19 crc kubenswrapper[4680]: I0217 18:07:19.766263 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-ovsdbserver-nb\") pod \"59054748-4087-4b87-85ad-ca3d680c4fa2\" (UID: \"59054748-4087-4b87-85ad-ca3d680c4fa2\") " Feb 17 18:07:19 crc kubenswrapper[4680]: I0217 18:07:19.766381 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-dns-svc\") pod \"59054748-4087-4b87-85ad-ca3d680c4fa2\" (UID: \"59054748-4087-4b87-85ad-ca3d680c4fa2\") " Feb 17 18:07:19 crc kubenswrapper[4680]: I0217 18:07:19.766513 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-config\") pod \"59054748-4087-4b87-85ad-ca3d680c4fa2\" (UID: \"59054748-4087-4b87-85ad-ca3d680c4fa2\") " Feb 17 18:07:19 crc kubenswrapper[4680]: I0217 18:07:19.787971 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59054748-4087-4b87-85ad-ca3d680c4fa2-kube-api-access-sn7z5" (OuterVolumeSpecName: "kube-api-access-sn7z5") pod "59054748-4087-4b87-85ad-ca3d680c4fa2" (UID: "59054748-4087-4b87-85ad-ca3d680c4fa2"). InnerVolumeSpecName "kube-api-access-sn7z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:07:19 crc kubenswrapper[4680]: I0217 18:07:19.837195 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "59054748-4087-4b87-85ad-ca3d680c4fa2" (UID: "59054748-4087-4b87-85ad-ca3d680c4fa2"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:07:19 crc kubenswrapper[4680]: I0217 18:07:19.843294 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "59054748-4087-4b87-85ad-ca3d680c4fa2" (UID: "59054748-4087-4b87-85ad-ca3d680c4fa2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:07:19 crc kubenswrapper[4680]: I0217 18:07:19.853753 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "59054748-4087-4b87-85ad-ca3d680c4fa2" (UID: "59054748-4087-4b87-85ad-ca3d680c4fa2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:07:19 crc kubenswrapper[4680]: I0217 18:07:19.861707 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "59054748-4087-4b87-85ad-ca3d680c4fa2" (UID: "59054748-4087-4b87-85ad-ca3d680c4fa2"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:07:19 crc kubenswrapper[4680]: I0217 18:07:19.868498 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 18:07:19 crc kubenswrapper[4680]: I0217 18:07:19.868532 4680 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 18:07:19 crc kubenswrapper[4680]: I0217 18:07:19.868546 4680 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 18:07:19 crc kubenswrapper[4680]: I0217 18:07:19.868555 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn7z5\" (UniqueName: \"kubernetes.io/projected/59054748-4087-4b87-85ad-ca3d680c4fa2-kube-api-access-sn7z5\") on node \"crc\" DevicePath \"\"" Feb 17 18:07:19 crc kubenswrapper[4680]: I0217 18:07:19.868566 4680 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 18:07:19 crc kubenswrapper[4680]: I0217 18:07:19.868685 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "59054748-4087-4b87-85ad-ca3d680c4fa2" (UID: "59054748-4087-4b87-85ad-ca3d680c4fa2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:07:19 crc kubenswrapper[4680]: I0217 18:07:19.872086 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-config" (OuterVolumeSpecName: "config") pod "59054748-4087-4b87-85ad-ca3d680c4fa2" (UID: "59054748-4087-4b87-85ad-ca3d680c4fa2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:07:19 crc kubenswrapper[4680]: I0217 18:07:19.970728 4680 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-config\") on node \"crc\" DevicePath \"\"" Feb 17 18:07:19 crc kubenswrapper[4680]: I0217 18:07:19.970782 4680 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59054748-4087-4b87-85ad-ca3d680c4fa2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 18:07:20 crc kubenswrapper[4680]: I0217 18:07:20.592977 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-65gq7" Feb 17 18:07:20 crc kubenswrapper[4680]: I0217 18:07:20.623288 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-65gq7"] Feb 17 18:07:20 crc kubenswrapper[4680]: I0217 18:07:20.633126 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-65gq7"] Feb 17 18:07:21 crc kubenswrapper[4680]: I0217 18:07:21.117941 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59054748-4087-4b87-85ad-ca3d680c4fa2" path="/var/lib/kubelet/pods/59054748-4087-4b87-85ad-ca3d680c4fa2/volumes" Feb 17 18:07:33 crc kubenswrapper[4680]: I0217 18:07:33.716417 4680 generic.go:334] "Generic (PLEG): container finished" podID="8ddff1fb-6281-40d0-82dd-7f395cd9d3dd" containerID="60138c37b1b00e416ce07dae2d6cb2c31128f6b9d7d7244b0e6f27a5c3b37fac" exitCode=0 Feb 17 18:07:33 crc kubenswrapper[4680]: I0217 18:07:33.716497 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd","Type":"ContainerDied","Data":"60138c37b1b00e416ce07dae2d6cb2c31128f6b9d7d7244b0e6f27a5c3b37fac"} Feb 17 18:07:34 crc kubenswrapper[4680]: I0217 18:07:34.727296 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8ddff1fb-6281-40d0-82dd-7f395cd9d3dd","Type":"ContainerStarted","Data":"325f84f03f28c8e76dd396f5083326b3b88e1dc4d6aac5f1af91fdf648660da6"} Feb 17 18:07:34 crc kubenswrapper[4680]: I0217 18:07:34.729270 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:07:34 crc kubenswrapper[4680]: I0217 18:07:34.754513 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.754499193 podStartE2EDuration="36.754499193s" podCreationTimestamp="2026-02-17 18:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:07:34.753304166 +0000 UTC m=+1384.304202845" watchObservedRunningTime="2026-02-17 18:07:34.754499193 +0000 UTC m=+1384.305397872" Feb 17 18:07:37 crc kubenswrapper[4680]: I0217 18:07:37.755731 4680 generic.go:334] "Generic (PLEG): container finished" podID="f5b94b27-be97-48aa-a421-49c29363f388" containerID="ddb3f1d2c213b33ea98802a6e2b47725d837538384372572792f80bb85b04868" exitCode=0 Feb 17 18:07:37 crc kubenswrapper[4680]: I0217 18:07:37.755781 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f5b94b27-be97-48aa-a421-49c29363f388","Type":"ContainerDied","Data":"ddb3f1d2c213b33ea98802a6e2b47725d837538384372572792f80bb85b04868"} Feb 17 18:07:38 crc kubenswrapper[4680]: 
I0217 18:07:38.768388 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f5b94b27-be97-48aa-a421-49c29363f388","Type":"ContainerStarted","Data":"3e8d8711875d1866a9de948e9b3b59c09dddaefc099b0d233456eb4f101e4f96"} Feb 17 18:07:38 crc kubenswrapper[4680]: I0217 18:07:38.769537 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 17 18:07:38 crc kubenswrapper[4680]: I0217 18:07:38.801181 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=35.80115476 podStartE2EDuration="35.80115476s" podCreationTimestamp="2026-02-17 18:07:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:07:38.792860429 +0000 UTC m=+1388.343759108" watchObservedRunningTime="2026-02-17 18:07:38.80115476 +0000 UTC m=+1388.352053439" Feb 17 18:07:42 crc kubenswrapper[4680]: I0217 18:07:42.173227 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8"] Feb 17 18:07:42 crc kubenswrapper[4680]: E0217 18:07:42.175013 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="250ac6e5-9c00-4f78-8833-9ffbd8e24549" containerName="init" Feb 17 18:07:42 crc kubenswrapper[4680]: I0217 18:07:42.175028 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="250ac6e5-9c00-4f78-8833-9ffbd8e24549" containerName="init" Feb 17 18:07:42 crc kubenswrapper[4680]: E0217 18:07:42.175052 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59054748-4087-4b87-85ad-ca3d680c4fa2" containerName="init" Feb 17 18:07:42 crc kubenswrapper[4680]: I0217 18:07:42.175058 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="59054748-4087-4b87-85ad-ca3d680c4fa2" containerName="init" Feb 17 18:07:42 crc kubenswrapper[4680]: E0217 18:07:42.175070 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="250ac6e5-9c00-4f78-8833-9ffbd8e24549" containerName="dnsmasq-dns" Feb 17 18:07:42 crc kubenswrapper[4680]: I0217 18:07:42.175076 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="250ac6e5-9c00-4f78-8833-9ffbd8e24549" containerName="dnsmasq-dns" Feb 17 18:07:42 crc kubenswrapper[4680]: E0217 18:07:42.175100 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59054748-4087-4b87-85ad-ca3d680c4fa2" containerName="dnsmasq-dns" Feb 17 18:07:42 crc kubenswrapper[4680]: I0217 18:07:42.175106 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="59054748-4087-4b87-85ad-ca3d680c4fa2" containerName="dnsmasq-dns" Feb 17 18:07:42 crc kubenswrapper[4680]: I0217 18:07:42.175281 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="250ac6e5-9c00-4f78-8833-9ffbd8e24549" containerName="dnsmasq-dns" Feb 17 18:07:42 crc kubenswrapper[4680]: I0217 18:07:42.175296 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="59054748-4087-4b87-85ad-ca3d680c4fa2" containerName="dnsmasq-dns" Feb 17 18:07:42 crc kubenswrapper[4680]: I0217 18:07:42.175922 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8" Feb 17 18:07:42 crc kubenswrapper[4680]: I0217 18:07:42.178097 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mhdwq" Feb 17 18:07:42 crc kubenswrapper[4680]: I0217 18:07:42.178915 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 18:07:42 crc kubenswrapper[4680]: I0217 18:07:42.179124 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 18:07:42 crc kubenswrapper[4680]: I0217 18:07:42.179461 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 18:07:42 crc kubenswrapper[4680]: I0217 18:07:42.232752 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8"] Feb 17 18:07:42 crc kubenswrapper[4680]: I0217 18:07:42.311193 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b65551bb-7084-4453-8791-0a12d9e928e4-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8\" (UID: \"b65551bb-7084-4453-8791-0a12d9e928e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8" Feb 17 18:07:42 crc kubenswrapper[4680]: I0217 18:07:42.311264 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b65551bb-7084-4453-8791-0a12d9e928e4-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8\" (UID: \"b65551bb-7084-4453-8791-0a12d9e928e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8" Feb 17 18:07:42 crc kubenswrapper[4680]: I0217 18:07:42.311331 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b65551bb-7084-4453-8791-0a12d9e928e4-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8\" (UID: \"b65551bb-7084-4453-8791-0a12d9e928e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8" Feb 17 18:07:42 crc kubenswrapper[4680]: I0217 18:07:42.311424 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5kx7\" (UniqueName: \"kubernetes.io/projected/b65551bb-7084-4453-8791-0a12d9e928e4-kube-api-access-p5kx7\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8\" (UID: \"b65551bb-7084-4453-8791-0a12d9e928e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8" Feb 17 18:07:42 crc kubenswrapper[4680]: I0217 18:07:42.412463 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b65551bb-7084-4453-8791-0a12d9e928e4-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8\" (UID: \"b65551bb-7084-4453-8791-0a12d9e928e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8" Feb 17 18:07:42 crc kubenswrapper[4680]: I0217 18:07:42.412512 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/b65551bb-7084-4453-8791-0a12d9e928e4-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8\" (UID: \"b65551bb-7084-4453-8791-0a12d9e928e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8" Feb 17 18:07:42 crc kubenswrapper[4680]: I0217 18:07:42.412565 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b65551bb-7084-4453-8791-0a12d9e928e4-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8\" (UID: \"b65551bb-7084-4453-8791-0a12d9e928e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8" Feb 17 18:07:42 crc kubenswrapper[4680]: I0217 18:07:42.412652 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5kx7\" (UniqueName: \"kubernetes.io/projected/b65551bb-7084-4453-8791-0a12d9e928e4-kube-api-access-p5kx7\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8\" (UID: \"b65551bb-7084-4453-8791-0a12d9e928e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8" Feb 17 18:07:42 crc kubenswrapper[4680]: I0217 18:07:42.419131 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b65551bb-7084-4453-8791-0a12d9e928e4-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8\" (UID: \"b65551bb-7084-4453-8791-0a12d9e928e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8" Feb 17 18:07:42 crc kubenswrapper[4680]: I0217 18:07:42.421289 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b65551bb-7084-4453-8791-0a12d9e928e4-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8\" (UID: \"b65551bb-7084-4453-8791-0a12d9e928e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8" Feb 17 18:07:42 crc kubenswrapper[4680]: I0217 18:07:42.421945 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b65551bb-7084-4453-8791-0a12d9e928e4-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8\" (UID: \"b65551bb-7084-4453-8791-0a12d9e928e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8" Feb 17 18:07:42 crc kubenswrapper[4680]: I0217 18:07:42.435911 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5kx7\" (UniqueName: \"kubernetes.io/projected/b65551bb-7084-4453-8791-0a12d9e928e4-kube-api-access-p5kx7\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8\" (UID: \"b65551bb-7084-4453-8791-0a12d9e928e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8" Feb 17 18:07:42 crc kubenswrapper[4680]: I0217 18:07:42.521477 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8" Feb 17 18:07:43 crc kubenswrapper[4680]: I0217 18:07:43.361522 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8"] Feb 17 18:07:43 crc kubenswrapper[4680]: I0217 18:07:43.819607 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8" event={"ID":"b65551bb-7084-4453-8791-0a12d9e928e4","Type":"ContainerStarted","Data":"c1fd56bca8245576ac0c436110c08de954b34c31bae18b15a38a2694cde163e6"} Feb 17 18:07:47 crc kubenswrapper[4680]: I0217 18:07:47.964799 4680 scope.go:117] "RemoveContainer" containerID="acacd5c6b4ae7f30a10d418918338f7071b41f9a3c126603192f50f06efed3e7" Feb 17 18:07:48 crc kubenswrapper[4680]: I0217 18:07:48.867507 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 17 18:07:52 crc kubenswrapper[4680]: I0217 18:07:52.662441 4680 scope.go:117] "RemoveContainer" containerID="e5a1e9e8d3ffe21cd9b6c1a53d1986f7eb2af21f038d5fac24501d29df104f01" Feb 17 18:07:53 crc kubenswrapper[4680]: I0217 18:07:53.772565 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 17 18:07:53 crc kubenswrapper[4680]: I0217 18:07:53.933693 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8" event={"ID":"b65551bb-7084-4453-8791-0a12d9e928e4","Type":"ContainerStarted","Data":"7703108a6c73af63914620e5ca794d8e5815be45716f96add08c3e96ffb4f90a"} Feb 17 18:07:53 crc kubenswrapper[4680]: I0217 18:07:53.958825 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8" podStartSLOduration=2.574251674 podStartE2EDuration="11.958802015s" podCreationTimestamp="2026-02-17 18:07:42 +0000 UTC" firstStartedPulling="2026-02-17 18:07:43.376055779 +0000 UTC m=+1392.926954458" lastFinishedPulling="2026-02-17 18:07:52.76060612 +0000 UTC m=+1402.311504799" observedRunningTime="2026-02-17 18:07:53.954210752 +0000 UTC m=+1403.505109451" watchObservedRunningTime="2026-02-17 18:07:53.958802015 +0000 UTC m=+1403.509700694" Feb 17 18:08:05 crc kubenswrapper[4680]: I0217 18:08:05.053472 4680 generic.go:334] "Generic (PLEG): container finished" podID="b65551bb-7084-4453-8791-0a12d9e928e4" containerID="7703108a6c73af63914620e5ca794d8e5815be45716f96add08c3e96ffb4f90a" exitCode=0 Feb 17 18:08:05 crc kubenswrapper[4680]: I0217 18:08:05.053518 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8" event={"ID":"b65551bb-7084-4453-8791-0a12d9e928e4","Type":"ContainerDied","Data":"7703108a6c73af63914620e5ca794d8e5815be45716f96add08c3e96ffb4f90a"} Feb 17 18:08:06 crc kubenswrapper[4680]: I0217 18:08:06.470114 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8" Feb 17 18:08:06 crc kubenswrapper[4680]: I0217 18:08:06.507846 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b65551bb-7084-4453-8791-0a12d9e928e4-repo-setup-combined-ca-bundle\") pod \"b65551bb-7084-4453-8791-0a12d9e928e4\" (UID: \"b65551bb-7084-4453-8791-0a12d9e928e4\") " Feb 17 18:08:06 crc kubenswrapper[4680]: I0217 18:08:06.507905 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5kx7\" (UniqueName: \"kubernetes.io/projected/b65551bb-7084-4453-8791-0a12d9e928e4-kube-api-access-p5kx7\") pod \"b65551bb-7084-4453-8791-0a12d9e928e4\" (UID: \"b65551bb-7084-4453-8791-0a12d9e928e4\") " Feb 17 18:08:06 crc kubenswrapper[4680]: I0217 18:08:06.507946 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b65551bb-7084-4453-8791-0a12d9e928e4-ssh-key-openstack-edpm-ipam\") pod \"b65551bb-7084-4453-8791-0a12d9e928e4\" (UID: \"b65551bb-7084-4453-8791-0a12d9e928e4\") " Feb 17 18:08:06 crc kubenswrapper[4680]: I0217 18:08:06.508019 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b65551bb-7084-4453-8791-0a12d9e928e4-inventory\") pod \"b65551bb-7084-4453-8791-0a12d9e928e4\" (UID: \"b65551bb-7084-4453-8791-0a12d9e928e4\") " Feb 17 18:08:06 crc kubenswrapper[4680]: I0217 18:08:06.514109 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b65551bb-7084-4453-8791-0a12d9e928e4-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "b65551bb-7084-4453-8791-0a12d9e928e4" (UID: "b65551bb-7084-4453-8791-0a12d9e928e4"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:08:06 crc kubenswrapper[4680]: I0217 18:08:06.526862 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b65551bb-7084-4453-8791-0a12d9e928e4-kube-api-access-p5kx7" (OuterVolumeSpecName: "kube-api-access-p5kx7") pod "b65551bb-7084-4453-8791-0a12d9e928e4" (UID: "b65551bb-7084-4453-8791-0a12d9e928e4"). InnerVolumeSpecName "kube-api-access-p5kx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:08:06 crc kubenswrapper[4680]: I0217 18:08:06.534507 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b65551bb-7084-4453-8791-0a12d9e928e4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b65551bb-7084-4453-8791-0a12d9e928e4" (UID: "b65551bb-7084-4453-8791-0a12d9e928e4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:08:06 crc kubenswrapper[4680]: I0217 18:08:06.538780 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b65551bb-7084-4453-8791-0a12d9e928e4-inventory" (OuterVolumeSpecName: "inventory") pod "b65551bb-7084-4453-8791-0a12d9e928e4" (UID: "b65551bb-7084-4453-8791-0a12d9e928e4"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:08:06 crc kubenswrapper[4680]: I0217 18:08:06.610672 4680 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b65551bb-7084-4453-8791-0a12d9e928e4-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:08:06 crc kubenswrapper[4680]: I0217 18:08:06.610706 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5kx7\" (UniqueName: \"kubernetes.io/projected/b65551bb-7084-4453-8791-0a12d9e928e4-kube-api-access-p5kx7\") on node \"crc\" DevicePath \"\"" Feb 17 18:08:06 crc kubenswrapper[4680]: I0217 18:08:06.610716 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b65551bb-7084-4453-8791-0a12d9e928e4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 18:08:06 crc kubenswrapper[4680]: I0217 18:08:06.610724 4680 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b65551bb-7084-4453-8791-0a12d9e928e4-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 18:08:07 crc kubenswrapper[4680]: I0217 18:08:07.071160 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8" event={"ID":"b65551bb-7084-4453-8791-0a12d9e928e4","Type":"ContainerDied","Data":"c1fd56bca8245576ac0c436110c08de954b34c31bae18b15a38a2694cde163e6"} Feb 17 18:08:07 crc kubenswrapper[4680]: I0217 18:08:07.071467 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1fd56bca8245576ac0c436110c08de954b34c31bae18b15a38a2694cde163e6" Feb 17 18:08:07 crc kubenswrapper[4680]: I0217 18:08:07.071384 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8" Feb 17 18:08:07 crc kubenswrapper[4680]: I0217 18:08:07.160047 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-trq4p"] Feb 17 18:08:07 crc kubenswrapper[4680]: E0217 18:08:07.160704 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b65551bb-7084-4453-8791-0a12d9e928e4" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 17 18:08:07 crc kubenswrapper[4680]: I0217 18:08:07.160799 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="b65551bb-7084-4453-8791-0a12d9e928e4" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 17 18:08:07 crc kubenswrapper[4680]: I0217 18:08:07.161119 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="b65551bb-7084-4453-8791-0a12d9e928e4" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 17 18:08:07 crc kubenswrapper[4680]: I0217 18:08:07.162053 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-trq4p" Feb 17 18:08:07 crc kubenswrapper[4680]: I0217 18:08:07.165789 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 18:08:07 crc kubenswrapper[4680]: I0217 18:08:07.165815 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 18:08:07 crc kubenswrapper[4680]: I0217 18:08:07.166272 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mhdwq" Feb 17 18:08:07 crc kubenswrapper[4680]: I0217 18:08:07.166598 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 18:08:07 crc kubenswrapper[4680]: I0217 18:08:07.173021 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-trq4p"] Feb 17 18:08:07 crc kubenswrapper[4680]: I0217 18:08:07.221752 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4dbcac70-066d-4d71-8f2d-f8762e241236-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-trq4p\" (UID: \"4dbcac70-066d-4d71-8f2d-f8762e241236\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-trq4p" Feb 17 18:08:07 crc kubenswrapper[4680]: I0217 18:08:07.221873 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ght9x\" (UniqueName: \"kubernetes.io/projected/4dbcac70-066d-4d71-8f2d-f8762e241236-kube-api-access-ght9x\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-trq4p\" (UID: \"4dbcac70-066d-4d71-8f2d-f8762e241236\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-trq4p" Feb 17 18:08:07 crc kubenswrapper[4680]: I0217 18:08:07.222029 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4dbcac70-066d-4d71-8f2d-f8762e241236-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-trq4p\" (UID: \"4dbcac70-066d-4d71-8f2d-f8762e241236\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-trq4p" Feb 17 18:08:07 crc kubenswrapper[4680]: I0217 18:08:07.325217 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4dbcac70-066d-4d71-8f2d-f8762e241236-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-trq4p\" (UID: \"4dbcac70-066d-4d71-8f2d-f8762e241236\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-trq4p" Feb 17 18:08:07 crc kubenswrapper[4680]: I0217 18:08:07.325541 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ght9x\" (UniqueName: \"kubernetes.io/projected/4dbcac70-066d-4d71-8f2d-f8762e241236-kube-api-access-ght9x\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-trq4p\" (UID: \"4dbcac70-066d-4d71-8f2d-f8762e241236\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-trq4p" Feb 17 18:08:07 crc kubenswrapper[4680]: I0217 18:08:07.326747 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4dbcac70-066d-4d71-8f2d-f8762e241236-inventory\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-trq4p\" (UID: \"4dbcac70-066d-4d71-8f2d-f8762e241236\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-trq4p" Feb 17 18:08:07 crc kubenswrapper[4680]: I0217 18:08:07.330740 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4dbcac70-066d-4d71-8f2d-f8762e241236-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-trq4p\" (UID: \"4dbcac70-066d-4d71-8f2d-f8762e241236\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-trq4p" Feb 17 18:08:07 crc kubenswrapper[4680]: I0217 18:08:07.330822 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4dbcac70-066d-4d71-8f2d-f8762e241236-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-trq4p\" (UID: \"4dbcac70-066d-4d71-8f2d-f8762e241236\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-trq4p" Feb 17 18:08:07 crc kubenswrapper[4680]: I0217 18:08:07.348517 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ght9x\" (UniqueName: \"kubernetes.io/projected/4dbcac70-066d-4d71-8f2d-f8762e241236-kube-api-access-ght9x\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-trq4p\" (UID: \"4dbcac70-066d-4d71-8f2d-f8762e241236\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-trq4p" Feb 17 18:08:07 crc kubenswrapper[4680]: I0217 18:08:07.479066 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-trq4p" Feb 17 18:08:07 crc kubenswrapper[4680]: I0217 18:08:07.968493 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-trq4p"] Feb 17 18:08:08 crc kubenswrapper[4680]: I0217 18:08:08.081364 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-trq4p" event={"ID":"4dbcac70-066d-4d71-8f2d-f8762e241236","Type":"ContainerStarted","Data":"3948a53c7ab20b4f9ed167972d379d79c86c4ea6efb716906aac0b7efe789e12"} Feb 17 18:08:09 crc kubenswrapper[4680]: I0217 18:08:09.092631 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-trq4p" event={"ID":"4dbcac70-066d-4d71-8f2d-f8762e241236","Type":"ContainerStarted","Data":"79d15d718237fd56ed96950790416259022bbc86d60515c78378f290dc984b8a"} Feb 17 18:08:09 crc kubenswrapper[4680]: I0217 18:08:09.114097 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-trq4p" podStartSLOduration=1.713136643 podStartE2EDuration="2.114075177s" podCreationTimestamp="2026-02-17 18:08:07 +0000 UTC" firstStartedPulling="2026-02-17 18:08:07.991269799 +0000 UTC m=+1417.542168478" lastFinishedPulling="2026-02-17 18:08:08.392208333 +0000 UTC m=+1417.943107012" observedRunningTime="2026-02-17 18:08:09.112776205 +0000 UTC m=+1418.663674884" watchObservedRunningTime="2026-02-17 18:08:09.114075177 +0000 UTC m=+1418.664973866" Feb 17 18:08:11 crc kubenswrapper[4680]: I0217 18:08:11.111936 4680 generic.go:334] "Generic (PLEG): container finished" podID="4dbcac70-066d-4d71-8f2d-f8762e241236" containerID="79d15d718237fd56ed96950790416259022bbc86d60515c78378f290dc984b8a" exitCode=0 Feb 17 18:08:11 crc kubenswrapper[4680]: I0217 18:08:11.118181 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-trq4p" event={"ID":"4dbcac70-066d-4d71-8f2d-f8762e241236","Type":"ContainerDied","Data":"79d15d718237fd56ed96950790416259022bbc86d60515c78378f290dc984b8a"} Feb 17 18:08:12 crc kubenswrapper[4680]: I0217 18:08:12.502685 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-trq4p" Feb 17 18:08:12 crc kubenswrapper[4680]: I0217 18:08:12.629367 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ght9x\" (UniqueName: \"kubernetes.io/projected/4dbcac70-066d-4d71-8f2d-f8762e241236-kube-api-access-ght9x\") pod \"4dbcac70-066d-4d71-8f2d-f8762e241236\" (UID: \"4dbcac70-066d-4d71-8f2d-f8762e241236\") " Feb 17 18:08:12 crc kubenswrapper[4680]: I0217 18:08:12.630386 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4dbcac70-066d-4d71-8f2d-f8762e241236-ssh-key-openstack-edpm-ipam\") pod \"4dbcac70-066d-4d71-8f2d-f8762e241236\" (UID: \"4dbcac70-066d-4d71-8f2d-f8762e241236\") " Feb 17 18:08:12 crc kubenswrapper[4680]: I0217 18:08:12.630495 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4dbcac70-066d-4d71-8f2d-f8762e241236-inventory\") pod \"4dbcac70-066d-4d71-8f2d-f8762e241236\" (UID: \"4dbcac70-066d-4d71-8f2d-f8762e241236\") " Feb 17 18:08:12 crc kubenswrapper[4680]: I0217 18:08:12.638744 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dbcac70-066d-4d71-8f2d-f8762e241236-kube-api-access-ght9x" (OuterVolumeSpecName: "kube-api-access-ght9x") pod "4dbcac70-066d-4d71-8f2d-f8762e241236" (UID: "4dbcac70-066d-4d71-8f2d-f8762e241236"). InnerVolumeSpecName "kube-api-access-ght9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:08:12 crc kubenswrapper[4680]: I0217 18:08:12.657046 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dbcac70-066d-4d71-8f2d-f8762e241236-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4dbcac70-066d-4d71-8f2d-f8762e241236" (UID: "4dbcac70-066d-4d71-8f2d-f8762e241236"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:08:12 crc kubenswrapper[4680]: I0217 18:08:12.666517 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dbcac70-066d-4d71-8f2d-f8762e241236-inventory" (OuterVolumeSpecName: "inventory") pod "4dbcac70-066d-4d71-8f2d-f8762e241236" (UID: "4dbcac70-066d-4d71-8f2d-f8762e241236"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:08:12 crc kubenswrapper[4680]: I0217 18:08:12.732744 4680 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4dbcac70-066d-4d71-8f2d-f8762e241236-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 18:08:12 crc kubenswrapper[4680]: I0217 18:08:12.732780 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ght9x\" (UniqueName: \"kubernetes.io/projected/4dbcac70-066d-4d71-8f2d-f8762e241236-kube-api-access-ght9x\") on node \"crc\" DevicePath \"\"" Feb 17 18:08:12 crc kubenswrapper[4680]: I0217 18:08:12.732801 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4dbcac70-066d-4d71-8f2d-f8762e241236-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 18:08:13 crc kubenswrapper[4680]: I0217 18:08:13.131904 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-trq4p" event={"ID":"4dbcac70-066d-4d71-8f2d-f8762e241236","Type":"ContainerDied","Data":"3948a53c7ab20b4f9ed167972d379d79c86c4ea6efb716906aac0b7efe789e12"} Feb 17 18:08:13 crc kubenswrapper[4680]: I0217 18:08:13.132089 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3948a53c7ab20b4f9ed167972d379d79c86c4ea6efb716906aac0b7efe789e12" Feb 17 18:08:13 crc kubenswrapper[4680]: I0217 18:08:13.132041 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-trq4p" Feb 17 18:08:13 crc kubenswrapper[4680]: I0217 18:08:13.217774 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd"] Feb 17 18:08:13 crc kubenswrapper[4680]: E0217 18:08:13.218238 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dbcac70-066d-4d71-8f2d-f8762e241236" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 17 18:08:13 crc kubenswrapper[4680]: I0217 18:08:13.218261 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dbcac70-066d-4d71-8f2d-f8762e241236" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 17 18:08:13 crc kubenswrapper[4680]: I0217 18:08:13.218521 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dbcac70-066d-4d71-8f2d-f8762e241236" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 17 18:08:13 crc kubenswrapper[4680]: I0217 18:08:13.219269 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd" Feb 17 18:08:13 crc kubenswrapper[4680]: I0217 18:08:13.222479 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 18:08:13 crc kubenswrapper[4680]: I0217 18:08:13.222658 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mhdwq" Feb 17 18:08:13 crc kubenswrapper[4680]: I0217 18:08:13.226265 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 18:08:13 crc kubenswrapper[4680]: I0217 18:08:13.226278 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 18:08:13 crc kubenswrapper[4680]: I0217 18:08:13.245052 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd"] Feb 17 18:08:13 crc kubenswrapper[4680]: I0217 18:08:13.344177 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/756e6a23-4101-4886-bfe8-cc4f12799fd7-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd\" (UID: \"756e6a23-4101-4886-bfe8-cc4f12799fd7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd" Feb 17 18:08:13 crc kubenswrapper[4680]: I0217 18:08:13.344286 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/756e6a23-4101-4886-bfe8-cc4f12799fd7-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd\" (UID: \"756e6a23-4101-4886-bfe8-cc4f12799fd7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd" Feb 17 18:08:13 crc kubenswrapper[4680]: I0217 18:08:13.344376 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fxpr\" (UniqueName: \"kubernetes.io/projected/756e6a23-4101-4886-bfe8-cc4f12799fd7-kube-api-access-8fxpr\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd\" (UID: \"756e6a23-4101-4886-bfe8-cc4f12799fd7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd" Feb 17 18:08:13 crc kubenswrapper[4680]: I0217 18:08:13.344400 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/756e6a23-4101-4886-bfe8-cc4f12799fd7-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd\" (UID: \"756e6a23-4101-4886-bfe8-cc4f12799fd7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd" Feb 17 18:08:13 crc kubenswrapper[4680]: I0217 18:08:13.446425 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/756e6a23-4101-4886-bfe8-cc4f12799fd7-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd\" (UID: \"756e6a23-4101-4886-bfe8-cc4f12799fd7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd" Feb 17 18:08:13 crc kubenswrapper[4680]: I0217 18:08:13.446816 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/756e6a23-4101-4886-bfe8-cc4f12799fd7-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd\" (UID: \"756e6a23-4101-4886-bfe8-cc4f12799fd7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd" Feb 17 18:08:13 crc kubenswrapper[4680]: I0217 18:08:13.446910 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fxpr\" (UniqueName: \"kubernetes.io/projected/756e6a23-4101-4886-bfe8-cc4f12799fd7-kube-api-access-8fxpr\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd\" (UID: \"756e6a23-4101-4886-bfe8-cc4f12799fd7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd" Feb 17 18:08:13 crc kubenswrapper[4680]: I0217 18:08:13.447114 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/756e6a23-4101-4886-bfe8-cc4f12799fd7-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd\" (UID: \"756e6a23-4101-4886-bfe8-cc4f12799fd7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd" Feb 17 18:08:13 crc kubenswrapper[4680]: I0217 18:08:13.450515 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/756e6a23-4101-4886-bfe8-cc4f12799fd7-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd\" (UID: \"756e6a23-4101-4886-bfe8-cc4f12799fd7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd" Feb 17 18:08:13 crc kubenswrapper[4680]: I0217 18:08:13.450804 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/756e6a23-4101-4886-bfe8-cc4f12799fd7-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd\" (UID: \"756e6a23-4101-4886-bfe8-cc4f12799fd7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd" Feb 17 18:08:13 crc kubenswrapper[4680]: I0217 18:08:13.455875 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/756e6a23-4101-4886-bfe8-cc4f12799fd7-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd\" (UID: \"756e6a23-4101-4886-bfe8-cc4f12799fd7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd" Feb 17 18:08:13 crc kubenswrapper[4680]: I0217 18:08:13.469287 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fxpr\" (UniqueName: \"kubernetes.io/projected/756e6a23-4101-4886-bfe8-cc4f12799fd7-kube-api-access-8fxpr\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd\" (UID: \"756e6a23-4101-4886-bfe8-cc4f12799fd7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd" Feb 17 18:08:13 crc kubenswrapper[4680]: I0217 18:08:13.577649 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd" Feb 17 18:08:14 crc kubenswrapper[4680]: I0217 18:08:14.107554 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd"] Feb 17 18:08:14 crc kubenswrapper[4680]: W0217 18:08:14.109362 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod756e6a23_4101_4886_bfe8_cc4f12799fd7.slice/crio-8a4ed59424c3cf11c56f8f1a8189be69a13567354a046c95b48aa58ebaed376a WatchSource:0}: Error finding container 8a4ed59424c3cf11c56f8f1a8189be69a13567354a046c95b48aa58ebaed376a: Status 404 returned error can't find the container with id 8a4ed59424c3cf11c56f8f1a8189be69a13567354a046c95b48aa58ebaed376a Feb 17 18:08:14 crc kubenswrapper[4680]: I0217 18:08:14.141067 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd" event={"ID":"756e6a23-4101-4886-bfe8-cc4f12799fd7","Type":"ContainerStarted","Data":"8a4ed59424c3cf11c56f8f1a8189be69a13567354a046c95b48aa58ebaed376a"} Feb 17 18:08:15 crc kubenswrapper[4680]: I0217 18:08:15.151577 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd" event={"ID":"756e6a23-4101-4886-bfe8-cc4f12799fd7","Type":"ContainerStarted","Data":"afd1465827bfabf8da23c9dc5e37b76dda67a1e25fb229dbef02a029e97e14d0"} Feb 17 18:08:15 crc kubenswrapper[4680]: I0217 18:08:15.182271 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd" podStartSLOduration=1.766771988 podStartE2EDuration="2.182249319s" podCreationTimestamp="2026-02-17 18:08:13 +0000 UTC" firstStartedPulling="2026-02-17 18:08:14.113398288 +0000 UTC m=+1423.664296967" lastFinishedPulling="2026-02-17 18:08:14.528875609 +0000 UTC m=+1424.079774298" observedRunningTime="2026-02-17 18:08:15.169060606 +0000 UTC m=+1424.719959295" watchObservedRunningTime="2026-02-17 18:08:15.182249319 +0000 UTC m=+1424.733147998" Feb 17 18:08:33 crc kubenswrapper[4680]: I0217 18:08:33.877810 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xjwdc"] Feb 17 18:08:33 crc kubenswrapper[4680]: I0217 18:08:33.882022 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xjwdc" Feb 17 18:08:33 crc kubenswrapper[4680]: I0217 18:08:33.888085 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xjwdc"] Feb 17 18:08:33 crc kubenswrapper[4680]: I0217 18:08:33.965367 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43ce48bf-81a3-485d-9017-912197d133df-catalog-content\") pod \"community-operators-xjwdc\" (UID: \"43ce48bf-81a3-485d-9017-912197d133df\") " pod="openshift-marketplace/community-operators-xjwdc" Feb 17 18:08:33 crc kubenswrapper[4680]: I0217 18:08:33.965561 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43ce48bf-81a3-485d-9017-912197d133df-utilities\") pod \"community-operators-xjwdc\" (UID: \"43ce48bf-81a3-485d-9017-912197d133df\") " pod="openshift-marketplace/community-operators-xjwdc" Feb 17 18:08:33 crc kubenswrapper[4680]: I0217 18:08:33.965652 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmftp\" (UniqueName: \"kubernetes.io/projected/43ce48bf-81a3-485d-9017-912197d133df-kube-api-access-rmftp\") pod \"community-operators-xjwdc\" (UID: \"43ce48bf-81a3-485d-9017-912197d133df\") " pod="openshift-marketplace/community-operators-xjwdc" Feb 17 18:08:34 crc kubenswrapper[4680]: I0217 18:08:34.067843 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43ce48bf-81a3-485d-9017-912197d133df-utilities\") pod \"community-operators-xjwdc\" (UID: \"43ce48bf-81a3-485d-9017-912197d133df\") " pod="openshift-marketplace/community-operators-xjwdc" Feb 17 18:08:34 crc kubenswrapper[4680]: I0217 18:08:34.067960 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmftp\" (UniqueName: \"kubernetes.io/projected/43ce48bf-81a3-485d-9017-912197d133df-kube-api-access-rmftp\") pod \"community-operators-xjwdc\" (UID: \"43ce48bf-81a3-485d-9017-912197d133df\") " pod="openshift-marketplace/community-operators-xjwdc" Feb 17 18:08:34 crc kubenswrapper[4680]: I0217 18:08:34.068034 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43ce48bf-81a3-485d-9017-912197d133df-catalog-content\") pod \"community-operators-xjwdc\" (UID: \"43ce48bf-81a3-485d-9017-912197d133df\") " pod="openshift-marketplace/community-operators-xjwdc" Feb 17 18:08:34 crc kubenswrapper[4680]: I0217 18:08:34.068584 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43ce48bf-81a3-485d-9017-912197d133df-catalog-content\") pod \"community-operators-xjwdc\" (UID: \"43ce48bf-81a3-485d-9017-912197d133df\") " pod="openshift-marketplace/community-operators-xjwdc" Feb 17 18:08:34 crc kubenswrapper[4680]: I0217 18:08:34.068951 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43ce48bf-81a3-485d-9017-912197d133df-utilities\") pod \"community-operators-xjwdc\" (UID: \"43ce48bf-81a3-485d-9017-912197d133df\") " pod="openshift-marketplace/community-operators-xjwdc" Feb 17 18:08:34 crc kubenswrapper[4680]: I0217 18:08:34.090816 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rmftp\" (UniqueName: \"kubernetes.io/projected/43ce48bf-81a3-485d-9017-912197d133df-kube-api-access-rmftp\") pod \"community-operators-xjwdc\" (UID: \"43ce48bf-81a3-485d-9017-912197d133df\") " pod="openshift-marketplace/community-operators-xjwdc" Feb 17 18:08:34 crc kubenswrapper[4680]: I0217 18:08:34.198862 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xjwdc" Feb 17 18:08:35 crc kubenswrapper[4680]: I0217 18:08:34.967393 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xjwdc"] Feb 17 18:08:35 crc kubenswrapper[4680]: I0217 18:08:35.334912 4680 generic.go:334] "Generic (PLEG): container finished" podID="43ce48bf-81a3-485d-9017-912197d133df" containerID="738bc32c5e0e354e894670a84eb1c99a2164229da2e015501a8c27a42153b40d" exitCode=0 Feb 17 18:08:35 crc kubenswrapper[4680]: I0217 18:08:35.334957 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xjwdc" event={"ID":"43ce48bf-81a3-485d-9017-912197d133df","Type":"ContainerDied","Data":"738bc32c5e0e354e894670a84eb1c99a2164229da2e015501a8c27a42153b40d"} Feb 17 18:08:35 crc kubenswrapper[4680]: I0217 18:08:35.335216 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xjwdc" event={"ID":"43ce48bf-81a3-485d-9017-912197d133df","Type":"ContainerStarted","Data":"3957c3fa60bff359f2e0cb9f19a93e2517e80ac2e23acd8e755881140fa85c75"} Feb 17 18:08:35 crc kubenswrapper[4680]: I0217 18:08:35.588370 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:08:35 crc kubenswrapper[4680]: I0217 18:08:35.588446 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:08:36 crc kubenswrapper[4680]: I0217 18:08:36.363961 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xjwdc" event={"ID":"43ce48bf-81a3-485d-9017-912197d133df","Type":"ContainerStarted","Data":"cbcf32ae2607652cf2359203c40d31e5b8fe16118878726f1fde254c14431416"} Feb 17 18:08:38 crc kubenswrapper[4680]: I0217 18:08:38.383021 4680 generic.go:334] "Generic (PLEG): container finished" podID="43ce48bf-81a3-485d-9017-912197d133df" containerID="cbcf32ae2607652cf2359203c40d31e5b8fe16118878726f1fde254c14431416" exitCode=0 Feb 17 18:08:38 crc kubenswrapper[4680]: I0217 18:08:38.383080 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xjwdc" event={"ID":"43ce48bf-81a3-485d-9017-912197d133df","Type":"ContainerDied","Data":"cbcf32ae2607652cf2359203c40d31e5b8fe16118878726f1fde254c14431416"} Feb 17 18:08:39 crc kubenswrapper[4680]: I0217 18:08:39.395172 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xjwdc" event={"ID":"43ce48bf-81a3-485d-9017-912197d133df","Type":"ContainerStarted","Data":"ae292ee1e7d7e4fb0ff63bb805764e2222104fedc9db52c14390d969c7b1bd98"} Feb 17 
18:08:39 crc kubenswrapper[4680]: I0217 18:08:39.413998 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xjwdc" podStartSLOduration=2.999671251 podStartE2EDuration="6.413956574s" podCreationTimestamp="2026-02-17 18:08:33 +0000 UTC" firstStartedPulling="2026-02-17 18:08:35.33839787 +0000 UTC m=+1444.889296549" lastFinishedPulling="2026-02-17 18:08:38.752683193 +0000 UTC m=+1448.303581872" observedRunningTime="2026-02-17 18:08:39.411193558 +0000 UTC m=+1448.962092257" watchObservedRunningTime="2026-02-17 18:08:39.413956574 +0000 UTC m=+1448.964855253" Feb 17 18:08:44 crc kubenswrapper[4680]: I0217 18:08:44.199524 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xjwdc" Feb 17 18:08:44 crc kubenswrapper[4680]: I0217 18:08:44.200047 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xjwdc" Feb 17 18:08:45 crc kubenswrapper[4680]: I0217 18:08:45.246140 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-xjwdc" podUID="43ce48bf-81a3-485d-9017-912197d133df" containerName="registry-server" probeResult="failure" output=< Feb 17 18:08:45 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Feb 17 18:08:45 crc kubenswrapper[4680]: > Feb 17 18:08:52 crc kubenswrapper[4680]: I0217 18:08:52.840999 4680 scope.go:117] "RemoveContainer" containerID="394cb7193dd074b518155626ecb88130290e11d93fd280dd23d4a7e3acbb5d92" Feb 17 18:08:54 crc kubenswrapper[4680]: I0217 18:08:54.257681 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xjwdc" Feb 17 18:08:54 crc kubenswrapper[4680]: I0217 18:08:54.303167 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xjwdc" Feb 17 18:08:54 crc kubenswrapper[4680]: I0217 18:08:54.502455 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xjwdc"] Feb 17 18:08:55 crc kubenswrapper[4680]: I0217 18:08:55.556384 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xjwdc" podUID="43ce48bf-81a3-485d-9017-912197d133df" containerName="registry-server" containerID="cri-o://ae292ee1e7d7e4fb0ff63bb805764e2222104fedc9db52c14390d969c7b1bd98" gracePeriod=2 Feb 17 18:08:56 crc kubenswrapper[4680]: I0217 18:08:56.003405 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xjwdc" Feb 17 18:08:56 crc kubenswrapper[4680]: I0217 18:08:56.146437 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmftp\" (UniqueName: \"kubernetes.io/projected/43ce48bf-81a3-485d-9017-912197d133df-kube-api-access-rmftp\") pod \"43ce48bf-81a3-485d-9017-912197d133df\" (UID: \"43ce48bf-81a3-485d-9017-912197d133df\") " Feb 17 18:08:56 crc kubenswrapper[4680]: I0217 18:08:56.146861 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43ce48bf-81a3-485d-9017-912197d133df-utilities\") pod \"43ce48bf-81a3-485d-9017-912197d133df\" (UID: \"43ce48bf-81a3-485d-9017-912197d133df\") " Feb 17 18:08:56 crc kubenswrapper[4680]: I0217 18:08:56.146991 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43ce48bf-81a3-485d-9017-912197d133df-catalog-content\") pod \"43ce48bf-81a3-485d-9017-912197d133df\" (UID: \"43ce48bf-81a3-485d-9017-912197d133df\") " Feb 17 18:08:56 crc kubenswrapper[4680]: I0217 18:08:56.147909 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43ce48bf-81a3-485d-9017-912197d133df-utilities" (OuterVolumeSpecName: "utilities") pod "43ce48bf-81a3-485d-9017-912197d133df" (UID: "43ce48bf-81a3-485d-9017-912197d133df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:08:56 crc kubenswrapper[4680]: I0217 18:08:56.151681 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43ce48bf-81a3-485d-9017-912197d133df-kube-api-access-rmftp" (OuterVolumeSpecName: "kube-api-access-rmftp") pod "43ce48bf-81a3-485d-9017-912197d133df" (UID: "43ce48bf-81a3-485d-9017-912197d133df"). InnerVolumeSpecName "kube-api-access-rmftp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:08:56 crc kubenswrapper[4680]: I0217 18:08:56.198480 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43ce48bf-81a3-485d-9017-912197d133df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "43ce48bf-81a3-485d-9017-912197d133df" (UID: "43ce48bf-81a3-485d-9017-912197d133df"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:08:56 crc kubenswrapper[4680]: I0217 18:08:56.249145 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmftp\" (UniqueName: \"kubernetes.io/projected/43ce48bf-81a3-485d-9017-912197d133df-kube-api-access-rmftp\") on node \"crc\" DevicePath \"\"" Feb 17 18:08:56 crc kubenswrapper[4680]: I0217 18:08:56.249464 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43ce48bf-81a3-485d-9017-912197d133df-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 18:08:56 crc kubenswrapper[4680]: I0217 18:08:56.249477 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43ce48bf-81a3-485d-9017-912197d133df-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 18:08:56 crc kubenswrapper[4680]: I0217 18:08:56.568558 4680 generic.go:334] "Generic (PLEG): container finished" podID="43ce48bf-81a3-485d-9017-912197d133df" containerID="ae292ee1e7d7e4fb0ff63bb805764e2222104fedc9db52c14390d969c7b1bd98" exitCode=0 Feb 17 18:08:56 crc kubenswrapper[4680]: I0217 18:08:56.568619 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xjwdc" event={"ID":"43ce48bf-81a3-485d-9017-912197d133df","Type":"ContainerDied","Data":"ae292ee1e7d7e4fb0ff63bb805764e2222104fedc9db52c14390d969c7b1bd98"} Feb 17 18:08:56 crc kubenswrapper[4680]: I0217 18:08:56.568747 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xjwdc" event={"ID":"43ce48bf-81a3-485d-9017-912197d133df","Type":"ContainerDied","Data":"3957c3fa60bff359f2e0cb9f19a93e2517e80ac2e23acd8e755881140fa85c75"} Feb 17 18:08:56 crc kubenswrapper[4680]: I0217 18:08:56.568770 4680 scope.go:117] "RemoveContainer" containerID="ae292ee1e7d7e4fb0ff63bb805764e2222104fedc9db52c14390d969c7b1bd98" Feb 17 18:08:56 crc kubenswrapper[4680]: I0217 18:08:56.570219 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xjwdc" Feb 17 18:08:56 crc kubenswrapper[4680]: I0217 18:08:56.598019 4680 scope.go:117] "RemoveContainer" containerID="cbcf32ae2607652cf2359203c40d31e5b8fe16118878726f1fde254c14431416" Feb 17 18:08:56 crc kubenswrapper[4680]: I0217 18:08:56.614999 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xjwdc"] Feb 17 18:08:56 crc kubenswrapper[4680]: I0217 18:08:56.624422 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xjwdc"] Feb 17 18:08:56 crc kubenswrapper[4680]: I0217 18:08:56.633080 4680 scope.go:117] "RemoveContainer" containerID="738bc32c5e0e354e894670a84eb1c99a2164229da2e015501a8c27a42153b40d" Feb 17 18:08:56 crc kubenswrapper[4680]: I0217 18:08:56.666077 4680 scope.go:117] "RemoveContainer" containerID="ae292ee1e7d7e4fb0ff63bb805764e2222104fedc9db52c14390d969c7b1bd98" Feb 17 18:08:56 crc kubenswrapper[4680]: E0217 18:08:56.666925 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae292ee1e7d7e4fb0ff63bb805764e2222104fedc9db52c14390d969c7b1bd98\": container with ID starting with ae292ee1e7d7e4fb0ff63bb805764e2222104fedc9db52c14390d969c7b1bd98 not found: ID does not exist" containerID="ae292ee1e7d7e4fb0ff63bb805764e2222104fedc9db52c14390d969c7b1bd98" Feb 17 18:08:56 crc kubenswrapper[4680]: I0217 18:08:56.666962 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae292ee1e7d7e4fb0ff63bb805764e2222104fedc9db52c14390d969c7b1bd98"} err="failed to get container status \"ae292ee1e7d7e4fb0ff63bb805764e2222104fedc9db52c14390d969c7b1bd98\": rpc error: code = NotFound desc = could not find container \"ae292ee1e7d7e4fb0ff63bb805764e2222104fedc9db52c14390d969c7b1bd98\": container with ID starting with ae292ee1e7d7e4fb0ff63bb805764e2222104fedc9db52c14390d969c7b1bd98 not found: ID does not exist" Feb 17 18:08:56 crc kubenswrapper[4680]: I0217 18:08:56.667013 4680 scope.go:117] "RemoveContainer" containerID="cbcf32ae2607652cf2359203c40d31e5b8fe16118878726f1fde254c14431416" Feb 17 18:08:56 crc kubenswrapper[4680]: E0217 18:08:56.667827 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbcf32ae2607652cf2359203c40d31e5b8fe16118878726f1fde254c14431416\": container with ID starting with cbcf32ae2607652cf2359203c40d31e5b8fe16118878726f1fde254c14431416 not found: ID does not exist" containerID="cbcf32ae2607652cf2359203c40d31e5b8fe16118878726f1fde254c14431416" Feb 17 18:08:56 crc kubenswrapper[4680]: I0217 18:08:56.667919 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbcf32ae2607652cf2359203c40d31e5b8fe16118878726f1fde254c14431416"} err="failed to get container status \"cbcf32ae2607652cf2359203c40d31e5b8fe16118878726f1fde254c14431416\": rpc error: code = NotFound desc = could not find container \"cbcf32ae2607652cf2359203c40d31e5b8fe16118878726f1fde254c14431416\": container with ID starting with cbcf32ae2607652cf2359203c40d31e5b8fe16118878726f1fde254c14431416 not found: ID does not exist" Feb 17 18:08:56 crc kubenswrapper[4680]: I0217 18:08:56.667966 4680 scope.go:117] "RemoveContainer" containerID="738bc32c5e0e354e894670a84eb1c99a2164229da2e015501a8c27a42153b40d" Feb 17 18:08:56 crc kubenswrapper[4680]: E0217 18:08:56.668487 4680 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"738bc32c5e0e354e894670a84eb1c99a2164229da2e015501a8c27a42153b40d\": container with ID starting with 738bc32c5e0e354e894670a84eb1c99a2164229da2e015501a8c27a42153b40d not found: ID does not exist" containerID="738bc32c5e0e354e894670a84eb1c99a2164229da2e015501a8c27a42153b40d" Feb 17 18:08:56 crc kubenswrapper[4680]: I0217 18:08:56.668543 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"738bc32c5e0e354e894670a84eb1c99a2164229da2e015501a8c27a42153b40d"} err="failed to get container status \"738bc32c5e0e354e894670a84eb1c99a2164229da2e015501a8c27a42153b40d\": rpc error: code = NotFound desc = could not find container \"738bc32c5e0e354e894670a84eb1c99a2164229da2e015501a8c27a42153b40d\": container with ID starting with 738bc32c5e0e354e894670a84eb1c99a2164229da2e015501a8c27a42153b40d not found: ID does not exist" Feb 17 18:08:57 crc kubenswrapper[4680]: I0217 18:08:57.115704 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43ce48bf-81a3-485d-9017-912197d133df" path="/var/lib/kubelet/pods/43ce48bf-81a3-485d-9017-912197d133df/volumes" Feb 17 18:09:00 crc kubenswrapper[4680]: I0217 18:09:00.442294 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9pzwb"] Feb 17 18:09:00 crc kubenswrapper[4680]: E0217 18:09:00.445056 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43ce48bf-81a3-485d-9017-912197d133df" containerName="registry-server" Feb 17 18:09:00 crc kubenswrapper[4680]: I0217 18:09:00.445077 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="43ce48bf-81a3-485d-9017-912197d133df" containerName="registry-server" Feb 17 18:09:00 crc kubenswrapper[4680]: E0217 18:09:00.445103 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43ce48bf-81a3-485d-9017-912197d133df" containerName="extract-utilities" Feb 17 18:09:00 crc kubenswrapper[4680]: I0217 18:09:00.445109 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="43ce48bf-81a3-485d-9017-912197d133df" containerName="extract-utilities" Feb 17 18:09:00 crc kubenswrapper[4680]: E0217 18:09:00.445148 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43ce48bf-81a3-485d-9017-912197d133df" containerName="extract-content" Feb 17 18:09:00 crc kubenswrapper[4680]: I0217 18:09:00.445155 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="43ce48bf-81a3-485d-9017-912197d133df" containerName="extract-content" Feb 17 18:09:00 crc kubenswrapper[4680]: I0217 18:09:00.445526 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="43ce48bf-81a3-485d-9017-912197d133df" containerName="registry-server" Feb 17 18:09:00 crc kubenswrapper[4680]: I0217 18:09:00.449418 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9pzwb" Feb 17 18:09:00 crc kubenswrapper[4680]: I0217 18:09:00.478066 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9pzwb"] Feb 17 18:09:00 crc kubenswrapper[4680]: I0217 18:09:00.529637 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1cf446a-2210-45fe-9f3e-21e69941a720-catalog-content\") pod \"redhat-marketplace-9pzwb\" (UID: \"f1cf446a-2210-45fe-9f3e-21e69941a720\") " pod="openshift-marketplace/redhat-marketplace-9pzwb" Feb 17 18:09:00 crc kubenswrapper[4680]: I0217 18:09:00.529775 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27jlm\" (UniqueName: \"kubernetes.io/projected/f1cf446a-2210-45fe-9f3e-21e69941a720-kube-api-access-27jlm\") pod \"redhat-marketplace-9pzwb\" (UID: \"f1cf446a-2210-45fe-9f3e-21e69941a720\") " pod="openshift-marketplace/redhat-marketplace-9pzwb" Feb 17 18:09:00 crc kubenswrapper[4680]: I0217 18:09:00.529960 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1cf446a-2210-45fe-9f3e-21e69941a720-utilities\") pod \"redhat-marketplace-9pzwb\" (UID: \"f1cf446a-2210-45fe-9f3e-21e69941a720\") " pod="openshift-marketplace/redhat-marketplace-9pzwb" Feb 17 18:09:00 crc kubenswrapper[4680]: I0217 18:09:00.631908 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1cf446a-2210-45fe-9f3e-21e69941a720-utilities\") pod \"redhat-marketplace-9pzwb\" (UID: \"f1cf446a-2210-45fe-9f3e-21e69941a720\") " pod="openshift-marketplace/redhat-marketplace-9pzwb" Feb 17 18:09:00 crc kubenswrapper[4680]: I0217 18:09:00.632413 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1cf446a-2210-45fe-9f3e-21e69941a720-utilities\") pod \"redhat-marketplace-9pzwb\" (UID: \"f1cf446a-2210-45fe-9f3e-21e69941a720\") " pod="openshift-marketplace/redhat-marketplace-9pzwb" Feb 17 18:09:00 crc kubenswrapper[4680]: I0217 18:09:00.633687 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1cf446a-2210-45fe-9f3e-21e69941a720-catalog-content\") pod \"redhat-marketplace-9pzwb\" (UID: \"f1cf446a-2210-45fe-9f3e-21e69941a720\") " pod="openshift-marketplace/redhat-marketplace-9pzwb" Feb 17 18:09:00 crc kubenswrapper[4680]: I0217 18:09:00.633793 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27jlm\" (UniqueName: \"kubernetes.io/projected/f1cf446a-2210-45fe-9f3e-21e69941a720-kube-api-access-27jlm\") pod \"redhat-marketplace-9pzwb\" (UID: \"f1cf446a-2210-45fe-9f3e-21e69941a720\") " pod="openshift-marketplace/redhat-marketplace-9pzwb" Feb 17 18:09:00 crc kubenswrapper[4680]: I0217 18:09:00.634847 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1cf446a-2210-45fe-9f3e-21e69941a720-catalog-content\") pod \"redhat-marketplace-9pzwb\" (UID: \"f1cf446a-2210-45fe-9f3e-21e69941a720\") " pod="openshift-marketplace/redhat-marketplace-9pzwb" Feb 17 18:09:00 crc kubenswrapper[4680]: I0217 18:09:00.661165 4680 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-27jlm\" (UniqueName: \"kubernetes.io/projected/f1cf446a-2210-45fe-9f3e-21e69941a720-kube-api-access-27jlm\") pod \"redhat-marketplace-9pzwb\" (UID: \"f1cf446a-2210-45fe-9f3e-21e69941a720\") " pod="openshift-marketplace/redhat-marketplace-9pzwb" Feb 17 18:09:00 crc kubenswrapper[4680]: I0217 18:09:00.778131 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9pzwb" Feb 17 18:09:01 crc kubenswrapper[4680]: I0217 18:09:01.233716 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9pzwb"] Feb 17 18:09:01 crc kubenswrapper[4680]: I0217 18:09:01.335118 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pzwb" event={"ID":"f1cf446a-2210-45fe-9f3e-21e69941a720","Type":"ContainerStarted","Data":"f49c6ee2a04ac3e42b9a68a295d4b72435f2eb0e1f0d49027a8e84d9ee5feee3"} Feb 17 18:09:02 crc kubenswrapper[4680]: I0217 18:09:02.352860 4680 generic.go:334] "Generic (PLEG): container finished" podID="f1cf446a-2210-45fe-9f3e-21e69941a720" containerID="7f09ba5d610afe2dc77642877e7051f3a630e0ab16a66c7155c946860c834f54" exitCode=0 Feb 17 18:09:02 crc kubenswrapper[4680]: I0217 18:09:02.353069 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pzwb" event={"ID":"f1cf446a-2210-45fe-9f3e-21e69941a720","Type":"ContainerDied","Data":"7f09ba5d610afe2dc77642877e7051f3a630e0ab16a66c7155c946860c834f54"} Feb 17 18:09:03 crc kubenswrapper[4680]: I0217 18:09:03.366215 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pzwb" event={"ID":"f1cf446a-2210-45fe-9f3e-21e69941a720","Type":"ContainerStarted","Data":"1f35950b45cfe0735dda87a20232a14cf8e4a8506ea19d626a289ae42091f462"} Feb 17 18:09:04 crc kubenswrapper[4680]: I0217 18:09:04.377132 4680 generic.go:334] "Generic (PLEG): container finished" podID="f1cf446a-2210-45fe-9f3e-21e69941a720" containerID="1f35950b45cfe0735dda87a20232a14cf8e4a8506ea19d626a289ae42091f462" exitCode=0 Feb 17 18:09:04 crc kubenswrapper[4680]: I0217 18:09:04.377181 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pzwb" event={"ID":"f1cf446a-2210-45fe-9f3e-21e69941a720","Type":"ContainerDied","Data":"1f35950b45cfe0735dda87a20232a14cf8e4a8506ea19d626a289ae42091f462"} Feb 17 18:09:05 crc kubenswrapper[4680]: I0217 18:09:05.386963 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pzwb" event={"ID":"f1cf446a-2210-45fe-9f3e-21e69941a720","Type":"ContainerStarted","Data":"4cfa14054a43de50c761fdddd400f5b3111a5499fb8ab7ad928cf11c5005ded1"} Feb 17 18:09:05 crc kubenswrapper[4680]: I0217 18:09:05.403565 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9pzwb" podStartSLOduration=3.011360224 podStartE2EDuration="5.403547752s" podCreationTimestamp="2026-02-17 18:09:00 +0000 UTC" firstStartedPulling="2026-02-17 18:09:02.355496454 +0000 UTC m=+1471.906395133" lastFinishedPulling="2026-02-17 18:09:04.747683982 +0000 UTC m=+1474.298582661" observedRunningTime="2026-02-17 18:09:05.402408846 +0000 UTC m=+1474.953307545" watchObservedRunningTime="2026-02-17 18:09:05.403547752 +0000 UTC m=+1474.954446421" Feb 17 18:09:05 crc kubenswrapper[4680]: I0217 18:09:05.588886 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:09:05 crc kubenswrapper[4680]: I0217 18:09:05.588959 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:09:10 crc kubenswrapper[4680]: I0217 18:09:10.778697 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9pzwb" Feb 17 18:09:10 crc kubenswrapper[4680]: I0217 18:09:10.779236 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9pzwb" Feb 17 18:09:10 crc kubenswrapper[4680]: I0217 18:09:10.825302 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9pzwb" Feb 17 18:09:11 crc kubenswrapper[4680]: I0217 18:09:11.490170 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9pzwb" Feb 17 18:09:11 crc kubenswrapper[4680]: I0217 18:09:11.549620 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9pzwb"] Feb 17 18:09:13 crc kubenswrapper[4680]: I0217 18:09:13.462993 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9pzwb" podUID="f1cf446a-2210-45fe-9f3e-21e69941a720" containerName="registry-server" containerID="cri-o://4cfa14054a43de50c761fdddd400f5b3111a5499fb8ab7ad928cf11c5005ded1" gracePeriod=2 Feb 17 18:09:13 crc kubenswrapper[4680]: I0217 18:09:13.900088 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9pzwb" Feb 17 18:09:14 crc kubenswrapper[4680]: I0217 18:09:14.090108 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1cf446a-2210-45fe-9f3e-21e69941a720-catalog-content\") pod \"f1cf446a-2210-45fe-9f3e-21e69941a720\" (UID: \"f1cf446a-2210-45fe-9f3e-21e69941a720\") " Feb 17 18:09:14 crc kubenswrapper[4680]: I0217 18:09:14.090164 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27jlm\" (UniqueName: \"kubernetes.io/projected/f1cf446a-2210-45fe-9f3e-21e69941a720-kube-api-access-27jlm\") pod \"f1cf446a-2210-45fe-9f3e-21e69941a720\" (UID: \"f1cf446a-2210-45fe-9f3e-21e69941a720\") " Feb 17 18:09:14 crc kubenswrapper[4680]: I0217 18:09:14.090420 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1cf446a-2210-45fe-9f3e-21e69941a720-utilities\") pod \"f1cf446a-2210-45fe-9f3e-21e69941a720\" (UID: \"f1cf446a-2210-45fe-9f3e-21e69941a720\") " Feb 17 18:09:14 crc kubenswrapper[4680]: I0217 18:09:14.091357 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1cf446a-2210-45fe-9f3e-21e69941a720-utilities" (OuterVolumeSpecName: "utilities") pod "f1cf446a-2210-45fe-9f3e-21e69941a720" (UID: "f1cf446a-2210-45fe-9f3e-21e69941a720"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:09:14 crc kubenswrapper[4680]: I0217 18:09:14.099694 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1cf446a-2210-45fe-9f3e-21e69941a720-kube-api-access-27jlm" (OuterVolumeSpecName: "kube-api-access-27jlm") pod "f1cf446a-2210-45fe-9f3e-21e69941a720" (UID: "f1cf446a-2210-45fe-9f3e-21e69941a720"). InnerVolumeSpecName "kube-api-access-27jlm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:09:14 crc kubenswrapper[4680]: I0217 18:09:14.112473 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1cf446a-2210-45fe-9f3e-21e69941a720-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1cf446a-2210-45fe-9f3e-21e69941a720" (UID: "f1cf446a-2210-45fe-9f3e-21e69941a720"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:09:14 crc kubenswrapper[4680]: I0217 18:09:14.192420 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1cf446a-2210-45fe-9f3e-21e69941a720-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 18:09:14 crc kubenswrapper[4680]: I0217 18:09:14.192450 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27jlm\" (UniqueName: \"kubernetes.io/projected/f1cf446a-2210-45fe-9f3e-21e69941a720-kube-api-access-27jlm\") on node \"crc\" DevicePath \"\"" Feb 17 18:09:14 crc kubenswrapper[4680]: I0217 18:09:14.192460 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1cf446a-2210-45fe-9f3e-21e69941a720-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 18:09:14 crc kubenswrapper[4680]: I0217 18:09:14.484299 4680 generic.go:334] "Generic (PLEG): container finished" podID="f1cf446a-2210-45fe-9f3e-21e69941a720" containerID="4cfa14054a43de50c761fdddd400f5b3111a5499fb8ab7ad928cf11c5005ded1" exitCode=0 Feb 17 18:09:14 crc kubenswrapper[4680]: I0217 18:09:14.484651 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pzwb" event={"ID":"f1cf446a-2210-45fe-9f3e-21e69941a720","Type":"ContainerDied","Data":"4cfa14054a43de50c761fdddd400f5b3111a5499fb8ab7ad928cf11c5005ded1"} Feb 17 18:09:14 crc kubenswrapper[4680]: I0217 18:09:14.484675 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pzwb" event={"ID":"f1cf446a-2210-45fe-9f3e-21e69941a720","Type":"ContainerDied","Data":"f49c6ee2a04ac3e42b9a68a295d4b72435f2eb0e1f0d49027a8e84d9ee5feee3"} Feb 17 18:09:14 crc kubenswrapper[4680]: I0217 18:09:14.484689 4680 scope.go:117] "RemoveContainer" containerID="4cfa14054a43de50c761fdddd400f5b3111a5499fb8ab7ad928cf11c5005ded1" Feb 17 18:09:14 crc kubenswrapper[4680]: I0217 18:09:14.484816 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9pzwb" Feb 17 18:09:14 crc kubenswrapper[4680]: I0217 18:09:14.527450 4680 scope.go:117] "RemoveContainer" containerID="1f35950b45cfe0735dda87a20232a14cf8e4a8506ea19d626a289ae42091f462" Feb 17 18:09:14 crc kubenswrapper[4680]: I0217 18:09:14.533867 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9pzwb"] Feb 17 18:09:14 crc kubenswrapper[4680]: I0217 18:09:14.545169 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9pzwb"] Feb 17 18:09:14 crc kubenswrapper[4680]: I0217 18:09:14.564922 4680 scope.go:117] "RemoveContainer" containerID="7f09ba5d610afe2dc77642877e7051f3a630e0ab16a66c7155c946860c834f54" Feb 17 18:09:14 crc kubenswrapper[4680]: I0217 18:09:14.603939 4680 scope.go:117] "RemoveContainer" containerID="4cfa14054a43de50c761fdddd400f5b3111a5499fb8ab7ad928cf11c5005ded1" Feb 17 18:09:14 crc kubenswrapper[4680]: E0217 18:09:14.604395 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cfa14054a43de50c761fdddd400f5b3111a5499fb8ab7ad928cf11c5005ded1\": container with ID starting with 4cfa14054a43de50c761fdddd400f5b3111a5499fb8ab7ad928cf11c5005ded1 not found: ID does not exist" containerID="4cfa14054a43de50c761fdddd400f5b3111a5499fb8ab7ad928cf11c5005ded1" Feb 17 18:09:14 crc kubenswrapper[4680]: I0217 18:09:14.604424 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cfa14054a43de50c761fdddd400f5b3111a5499fb8ab7ad928cf11c5005ded1"} err="failed to get container status \"4cfa14054a43de50c761fdddd400f5b3111a5499fb8ab7ad928cf11c5005ded1\": rpc error: code = NotFound desc = could not find container \"4cfa14054a43de50c761fdddd400f5b3111a5499fb8ab7ad928cf11c5005ded1\": container with ID starting with 4cfa14054a43de50c761fdddd400f5b3111a5499fb8ab7ad928cf11c5005ded1 not found: ID does not exist" Feb 17 18:09:14 crc kubenswrapper[4680]: I0217 18:09:14.604443 4680 scope.go:117] "RemoveContainer" containerID="1f35950b45cfe0735dda87a20232a14cf8e4a8506ea19d626a289ae42091f462" Feb 17 18:09:14 crc kubenswrapper[4680]: E0217 18:09:14.604871 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f35950b45cfe0735dda87a20232a14cf8e4a8506ea19d626a289ae42091f462\": container with ID starting with 1f35950b45cfe0735dda87a20232a14cf8e4a8506ea19d626a289ae42091f462 not found: ID does not exist" containerID="1f35950b45cfe0735dda87a20232a14cf8e4a8506ea19d626a289ae42091f462" Feb 17 18:09:14 crc kubenswrapper[4680]: I0217 18:09:14.604897 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f35950b45cfe0735dda87a20232a14cf8e4a8506ea19d626a289ae42091f462"} err="failed to get container status \"1f35950b45cfe0735dda87a20232a14cf8e4a8506ea19d626a289ae42091f462\": rpc error: code = NotFound desc = could not find container \"1f35950b45cfe0735dda87a20232a14cf8e4a8506ea19d626a289ae42091f462\": container with ID starting with 1f35950b45cfe0735dda87a20232a14cf8e4a8506ea19d626a289ae42091f462 not found: ID does not exist" Feb 17 18:09:14 crc kubenswrapper[4680]: I0217 18:09:14.604914 4680 scope.go:117] "RemoveContainer" containerID="7f09ba5d610afe2dc77642877e7051f3a630e0ab16a66c7155c946860c834f54" Feb 17 18:09:14 crc kubenswrapper[4680]: E0217 18:09:14.605296 4680 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7f09ba5d610afe2dc77642877e7051f3a630e0ab16a66c7155c946860c834f54\": container with ID starting with 7f09ba5d610afe2dc77642877e7051f3a630e0ab16a66c7155c946860c834f54 not found: ID does not exist" containerID="7f09ba5d610afe2dc77642877e7051f3a630e0ab16a66c7155c946860c834f54" Feb 17 18:09:14 crc kubenswrapper[4680]: I0217 18:09:14.605358 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f09ba5d610afe2dc77642877e7051f3a630e0ab16a66c7155c946860c834f54"} err="failed to get container status \"7f09ba5d610afe2dc77642877e7051f3a630e0ab16a66c7155c946860c834f54\": rpc error: code = NotFound desc = could not find container \"7f09ba5d610afe2dc77642877e7051f3a630e0ab16a66c7155c946860c834f54\": container with ID starting with 7f09ba5d610afe2dc77642877e7051f3a630e0ab16a66c7155c946860c834f54 not found: ID does not exist" Feb 17 18:09:15 crc kubenswrapper[4680]: I0217 18:09:15.115537 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1cf446a-2210-45fe-9f3e-21e69941a720" path="/var/lib/kubelet/pods/f1cf446a-2210-45fe-9f3e-21e69941a720/volumes" Feb 17 18:09:35 crc kubenswrapper[4680]: I0217 18:09:35.588344 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:09:35 crc kubenswrapper[4680]: I0217 18:09:35.588979 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:09:35 crc kubenswrapper[4680]: I0217 18:09:35.589035 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" Feb 17 18:09:35 crc kubenswrapper[4680]: I0217 18:09:35.589834 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d8de076bce422cc1e91fb92ebdaf2197336fde15a42ede9bcc224d5d09c8ff1"} pod="openshift-machine-config-operator/machine-config-daemon-rfw56" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 18:09:35 crc kubenswrapper[4680]: I0217 18:09:35.589905 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" containerID="cri-o://3d8de076bce422cc1e91fb92ebdaf2197336fde15a42ede9bcc224d5d09c8ff1" gracePeriod=600 Feb 17 18:09:35 crc kubenswrapper[4680]: E0217 18:09:35.770869 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:09:36 crc kubenswrapper[4680]: I0217 18:09:36.680212 4680 generic.go:334] 
"Generic (PLEG): container finished" podID="8dd08058-016e-4607-8be6-40463674c2fe" containerID="3d8de076bce422cc1e91fb92ebdaf2197336fde15a42ede9bcc224d5d09c8ff1" exitCode=0 Feb 17 18:09:36 crc kubenswrapper[4680]: I0217 18:09:36.680261 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerDied","Data":"3d8de076bce422cc1e91fb92ebdaf2197336fde15a42ede9bcc224d5d09c8ff1"} Feb 17 18:09:36 crc kubenswrapper[4680]: I0217 18:09:36.680343 4680 scope.go:117] "RemoveContainer" containerID="ad1d4ed6f92a7e4627e28b3747e3d87e3433b7897d128c386001f1954f052c13" Feb 17 18:09:36 crc kubenswrapper[4680]: I0217 18:09:36.681532 4680 scope.go:117] "RemoveContainer" containerID="3d8de076bce422cc1e91fb92ebdaf2197336fde15a42ede9bcc224d5d09c8ff1" Feb 17 18:09:36 crc kubenswrapper[4680]: E0217 18:09:36.682308 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:09:48 crc kubenswrapper[4680]: I0217 18:09:48.106654 4680 scope.go:117] "RemoveContainer" containerID="3d8de076bce422cc1e91fb92ebdaf2197336fde15a42ede9bcc224d5d09c8ff1" Feb 17 18:09:48 crc kubenswrapper[4680]: E0217 18:09:48.107751 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:09:52 crc kubenswrapper[4680]: I0217 18:09:52.891598 4680 scope.go:117] "RemoveContainer" containerID="e97ceb3a545bd343c539431e7fb797c52d93980694d292d8edfc8a71e6d34097" Feb 17 18:09:52 crc kubenswrapper[4680]: I0217 18:09:52.913464 4680 scope.go:117] "RemoveContainer" containerID="e990cc9ce1632560b33ecd5097fcd50919af8af5d9eed2978790c71d8ef68ee9" Feb 17 18:10:01 crc kubenswrapper[4680]: I0217 18:10:01.128428 4680 scope.go:117] "RemoveContainer" containerID="3d8de076bce422cc1e91fb92ebdaf2197336fde15a42ede9bcc224d5d09c8ff1" Feb 17 18:10:01 crc kubenswrapper[4680]: E0217 18:10:01.131054 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:10:12 crc kubenswrapper[4680]: I0217 18:10:12.107136 4680 scope.go:117] "RemoveContainer" containerID="3d8de076bce422cc1e91fb92ebdaf2197336fde15a42ede9bcc224d5d09c8ff1" Feb 17 18:10:12 crc kubenswrapper[4680]: E0217 18:10:12.108540 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:10:14 crc kubenswrapper[4680]: I0217 18:10:14.927894 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9tcpm"] Feb 17 18:10:14 crc kubenswrapper[4680]: E0217 18:10:14.928686 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1cf446a-2210-45fe-9f3e-21e69941a720" containerName="extract-content" Feb 17 18:10:14 crc kubenswrapper[4680]: I0217 18:10:14.928706 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1cf446a-2210-45fe-9f3e-21e69941a720" containerName="extract-content" Feb 17 18:10:14 crc kubenswrapper[4680]: E0217 18:10:14.928741 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1cf446a-2210-45fe-9f3e-21e69941a720" containerName="registry-server" Feb 17 18:10:14 crc kubenswrapper[4680]: I0217 18:10:14.928750 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1cf446a-2210-45fe-9f3e-21e69941a720" containerName="registry-server" Feb 17 18:10:14 crc kubenswrapper[4680]: E0217 18:10:14.928773 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1cf446a-2210-45fe-9f3e-21e69941a720" containerName="extract-utilities" Feb 17 18:10:14 crc kubenswrapper[4680]: I0217 18:10:14.928783 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1cf446a-2210-45fe-9f3e-21e69941a720" containerName="extract-utilities" Feb 17 18:10:14 crc kubenswrapper[4680]: I0217 18:10:14.928996 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1cf446a-2210-45fe-9f3e-21e69941a720" containerName="registry-server" Feb 17 18:10:14 crc kubenswrapper[4680]: I0217 18:10:14.930782 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9tcpm" Feb 17 18:10:14 crc kubenswrapper[4680]: I0217 18:10:14.946003 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9tcpm"] Feb 17 18:10:15 crc kubenswrapper[4680]: I0217 18:10:15.027818 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpbd7\" (UniqueName: \"kubernetes.io/projected/f643cbdd-2cfa-4853-81a1-ebcc96a498b9-kube-api-access-vpbd7\") pod \"certified-operators-9tcpm\" (UID: \"f643cbdd-2cfa-4853-81a1-ebcc96a498b9\") " pod="openshift-marketplace/certified-operators-9tcpm" Feb 17 18:10:15 crc kubenswrapper[4680]: I0217 18:10:15.027890 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f643cbdd-2cfa-4853-81a1-ebcc96a498b9-catalog-content\") pod \"certified-operators-9tcpm\" (UID: \"f643cbdd-2cfa-4853-81a1-ebcc96a498b9\") " pod="openshift-marketplace/certified-operators-9tcpm" Feb 17 18:10:15 crc kubenswrapper[4680]: I0217 18:10:15.027912 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f643cbdd-2cfa-4853-81a1-ebcc96a498b9-utilities\") pod \"certified-operators-9tcpm\" (UID: \"f643cbdd-2cfa-4853-81a1-ebcc96a498b9\") " pod="openshift-marketplace/certified-operators-9tcpm" Feb 17 18:10:15 crc kubenswrapper[4680]: I0217 18:10:15.129422 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpbd7\" (UniqueName: \"kubernetes.io/projected/f643cbdd-2cfa-4853-81a1-ebcc96a498b9-kube-api-access-vpbd7\") pod \"certified-operators-9tcpm\" (UID: \"f643cbdd-2cfa-4853-81a1-ebcc96a498b9\") " pod="openshift-marketplace/certified-operators-9tcpm" Feb 17 18:10:15 crc kubenswrapper[4680]: I0217 18:10:15.129779 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f643cbdd-2cfa-4853-81a1-ebcc96a498b9-catalog-content\") pod \"certified-operators-9tcpm\" (UID: \"f643cbdd-2cfa-4853-81a1-ebcc96a498b9\") " pod="openshift-marketplace/certified-operators-9tcpm" Feb 17 18:10:15 crc kubenswrapper[4680]: I0217 18:10:15.129907 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f643cbdd-2cfa-4853-81a1-ebcc96a498b9-utilities\") pod \"certified-operators-9tcpm\" (UID: \"f643cbdd-2cfa-4853-81a1-ebcc96a498b9\") " pod="openshift-marketplace/certified-operators-9tcpm" Feb 17 18:10:15 crc kubenswrapper[4680]: I0217 18:10:15.130360 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f643cbdd-2cfa-4853-81a1-ebcc96a498b9-catalog-content\") pod \"certified-operators-9tcpm\" (UID: \"f643cbdd-2cfa-4853-81a1-ebcc96a498b9\") " pod="openshift-marketplace/certified-operators-9tcpm" Feb 17 18:10:15 crc kubenswrapper[4680]: I0217 18:10:15.130419 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f643cbdd-2cfa-4853-81a1-ebcc96a498b9-utilities\") pod \"certified-operators-9tcpm\" (UID: \"f643cbdd-2cfa-4853-81a1-ebcc96a498b9\") " pod="openshift-marketplace/certified-operators-9tcpm" Feb 17 18:10:15 crc kubenswrapper[4680]: I0217 18:10:15.148208 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vpbd7\" (UniqueName: \"kubernetes.io/projected/f643cbdd-2cfa-4853-81a1-ebcc96a498b9-kube-api-access-vpbd7\") pod \"certified-operators-9tcpm\" (UID: \"f643cbdd-2cfa-4853-81a1-ebcc96a498b9\") " pod="openshift-marketplace/certified-operators-9tcpm" Feb 17 18:10:15 crc kubenswrapper[4680]: I0217 18:10:15.252447 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9tcpm" Feb 17 18:10:15 crc kubenswrapper[4680]: I0217 18:10:15.773107 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9tcpm"] Feb 17 18:10:16 crc kubenswrapper[4680]: I0217 18:10:16.017916 4680 generic.go:334] "Generic (PLEG): container finished" podID="f643cbdd-2cfa-4853-81a1-ebcc96a498b9" containerID="b2a0b0b5ac6402c762b155070e975562a5a54d7be2f63b604a95d5fc8d50b300" exitCode=0 Feb 17 18:10:16 crc kubenswrapper[4680]: I0217 18:10:16.017961 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9tcpm" event={"ID":"f643cbdd-2cfa-4853-81a1-ebcc96a498b9","Type":"ContainerDied","Data":"b2a0b0b5ac6402c762b155070e975562a5a54d7be2f63b604a95d5fc8d50b300"} Feb 17 18:10:16 crc kubenswrapper[4680]: I0217 18:10:16.017988 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9tcpm" event={"ID":"f643cbdd-2cfa-4853-81a1-ebcc96a498b9","Type":"ContainerStarted","Data":"87b24432ed6122ebb8ce3dddf1a9ba23d19c220de57ca75120a7eaaf99b6ca9a"} Feb 17 18:10:17 crc kubenswrapper[4680]: I0217 18:10:17.027964 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9tcpm" event={"ID":"f643cbdd-2cfa-4853-81a1-ebcc96a498b9","Type":"ContainerStarted","Data":"e27b4095843358e0d497cb02d4e291001ccd7ba2dfa8ad33df4a6dfb79733b4a"} Feb 17 18:10:19 crc kubenswrapper[4680]: I0217 18:10:19.045630 4680 generic.go:334] "Generic (PLEG): container finished" podID="f643cbdd-2cfa-4853-81a1-ebcc96a498b9" containerID="e27b4095843358e0d497cb02d4e291001ccd7ba2dfa8ad33df4a6dfb79733b4a" exitCode=0 Feb 17 18:10:19 crc kubenswrapper[4680]: I0217 18:10:19.045715 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9tcpm" event={"ID":"f643cbdd-2cfa-4853-81a1-ebcc96a498b9","Type":"ContainerDied","Data":"e27b4095843358e0d497cb02d4e291001ccd7ba2dfa8ad33df4a6dfb79733b4a"} Feb 17 18:10:20 crc kubenswrapper[4680]: I0217 18:10:20.057943 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9tcpm" event={"ID":"f643cbdd-2cfa-4853-81a1-ebcc96a498b9","Type":"ContainerStarted","Data":"96437ae1827b7020dc5b91e2bab2f89ab1dd34d198f0c02f6d2a065dcd72dec7"} Feb 17 18:10:20 crc kubenswrapper[4680]: I0217 18:10:20.079803 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9tcpm" podStartSLOduration=2.540041238 podStartE2EDuration="6.079786427s" podCreationTimestamp="2026-02-17 18:10:14 +0000 UTC" firstStartedPulling="2026-02-17 18:10:16.019822752 +0000 UTC m=+1545.570721431" lastFinishedPulling="2026-02-17 18:10:19.559567941 +0000 UTC m=+1549.110466620" observedRunningTime="2026-02-17 18:10:20.073722248 +0000 UTC m=+1549.624620927" watchObservedRunningTime="2026-02-17 18:10:20.079786427 +0000 UTC m=+1549.630685106" Feb 17 18:10:23 crc kubenswrapper[4680]: I0217 18:10:23.105740 4680 scope.go:117] "RemoveContainer" 
containerID="3d8de076bce422cc1e91fb92ebdaf2197336fde15a42ede9bcc224d5d09c8ff1" Feb 17 18:10:23 crc kubenswrapper[4680]: E0217 18:10:23.106017 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:10:25 crc kubenswrapper[4680]: I0217 18:10:25.253149 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9tcpm" Feb 17 18:10:25 crc kubenswrapper[4680]: I0217 18:10:25.253532 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9tcpm" Feb 17 18:10:25 crc kubenswrapper[4680]: I0217 18:10:25.300817 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9tcpm" Feb 17 18:10:26 crc kubenswrapper[4680]: I0217 18:10:26.145645 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9tcpm" Feb 17 18:10:26 crc kubenswrapper[4680]: I0217 18:10:26.203368 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9tcpm"] Feb 17 18:10:28 crc kubenswrapper[4680]: I0217 18:10:28.120475 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9tcpm" podUID="f643cbdd-2cfa-4853-81a1-ebcc96a498b9" containerName="registry-server" containerID="cri-o://96437ae1827b7020dc5b91e2bab2f89ab1dd34d198f0c02f6d2a065dcd72dec7" gracePeriod=2 Feb 17 18:10:28 crc kubenswrapper[4680]: I0217 18:10:28.607186 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9tcpm" Feb 17 18:10:28 crc kubenswrapper[4680]: I0217 18:10:28.674819 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpbd7\" (UniqueName: \"kubernetes.io/projected/f643cbdd-2cfa-4853-81a1-ebcc96a498b9-kube-api-access-vpbd7\") pod \"f643cbdd-2cfa-4853-81a1-ebcc96a498b9\" (UID: \"f643cbdd-2cfa-4853-81a1-ebcc96a498b9\") " Feb 17 18:10:28 crc kubenswrapper[4680]: I0217 18:10:28.674879 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f643cbdd-2cfa-4853-81a1-ebcc96a498b9-utilities\") pod \"f643cbdd-2cfa-4853-81a1-ebcc96a498b9\" (UID: \"f643cbdd-2cfa-4853-81a1-ebcc96a498b9\") " Feb 17 18:10:28 crc kubenswrapper[4680]: I0217 18:10:28.675073 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f643cbdd-2cfa-4853-81a1-ebcc96a498b9-catalog-content\") pod \"f643cbdd-2cfa-4853-81a1-ebcc96a498b9\" (UID: \"f643cbdd-2cfa-4853-81a1-ebcc96a498b9\") " Feb 17 18:10:28 crc kubenswrapper[4680]: I0217 18:10:28.676661 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f643cbdd-2cfa-4853-81a1-ebcc96a498b9-utilities" (OuterVolumeSpecName: "utilities") pod "f643cbdd-2cfa-4853-81a1-ebcc96a498b9" (UID: "f643cbdd-2cfa-4853-81a1-ebcc96a498b9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:10:28 crc kubenswrapper[4680]: I0217 18:10:28.688239 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f643cbdd-2cfa-4853-81a1-ebcc96a498b9-kube-api-access-vpbd7" (OuterVolumeSpecName: "kube-api-access-vpbd7") pod "f643cbdd-2cfa-4853-81a1-ebcc96a498b9" (UID: "f643cbdd-2cfa-4853-81a1-ebcc96a498b9"). InnerVolumeSpecName "kube-api-access-vpbd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:10:28 crc kubenswrapper[4680]: I0217 18:10:28.730940 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f643cbdd-2cfa-4853-81a1-ebcc96a498b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f643cbdd-2cfa-4853-81a1-ebcc96a498b9" (UID: "f643cbdd-2cfa-4853-81a1-ebcc96a498b9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:10:28 crc kubenswrapper[4680]: I0217 18:10:28.776905 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpbd7\" (UniqueName: \"kubernetes.io/projected/f643cbdd-2cfa-4853-81a1-ebcc96a498b9-kube-api-access-vpbd7\") on node \"crc\" DevicePath \"\"" Feb 17 18:10:28 crc kubenswrapper[4680]: I0217 18:10:28.776945 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f643cbdd-2cfa-4853-81a1-ebcc96a498b9-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 18:10:28 crc kubenswrapper[4680]: I0217 18:10:28.776954 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f643cbdd-2cfa-4853-81a1-ebcc96a498b9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 18:10:29 crc kubenswrapper[4680]: I0217 18:10:29.132115 4680 generic.go:334] "Generic (PLEG): container finished" podID="f643cbdd-2cfa-4853-81a1-ebcc96a498b9" containerID="96437ae1827b7020dc5b91e2bab2f89ab1dd34d198f0c02f6d2a065dcd72dec7" exitCode=0 Feb 17 18:10:29 crc kubenswrapper[4680]: I0217 18:10:29.132157 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9tcpm" Feb 17 18:10:29 crc kubenswrapper[4680]: I0217 18:10:29.132162 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9tcpm" event={"ID":"f643cbdd-2cfa-4853-81a1-ebcc96a498b9","Type":"ContainerDied","Data":"96437ae1827b7020dc5b91e2bab2f89ab1dd34d198f0c02f6d2a065dcd72dec7"} Feb 17 18:10:29 crc kubenswrapper[4680]: I0217 18:10:29.132215 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9tcpm" event={"ID":"f643cbdd-2cfa-4853-81a1-ebcc96a498b9","Type":"ContainerDied","Data":"87b24432ed6122ebb8ce3dddf1a9ba23d19c220de57ca75120a7eaaf99b6ca9a"} Feb 17 18:10:29 crc kubenswrapper[4680]: I0217 18:10:29.132233 4680 scope.go:117] "RemoveContainer" containerID="96437ae1827b7020dc5b91e2bab2f89ab1dd34d198f0c02f6d2a065dcd72dec7" Feb 17 18:10:29 crc kubenswrapper[4680]: I0217 18:10:29.160994 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9tcpm"] Feb 17 18:10:29 crc kubenswrapper[4680]: I0217 18:10:29.170417 4680 scope.go:117] "RemoveContainer" containerID="e27b4095843358e0d497cb02d4e291001ccd7ba2dfa8ad33df4a6dfb79733b4a" Feb 17 18:10:29 crc kubenswrapper[4680]: I0217 18:10:29.170460 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9tcpm"] Feb 17 18:10:29 crc kubenswrapper[4680]: I0217 18:10:29.193538 4680 scope.go:117] "RemoveContainer" containerID="b2a0b0b5ac6402c762b155070e975562a5a54d7be2f63b604a95d5fc8d50b300" Feb 17 18:10:29 crc kubenswrapper[4680]: I0217 18:10:29.234771 4680 scope.go:117] "RemoveContainer" containerID="96437ae1827b7020dc5b91e2bab2f89ab1dd34d198f0c02f6d2a065dcd72dec7" Feb 17 18:10:29 crc kubenswrapper[4680]: E0217 18:10:29.235360 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96437ae1827b7020dc5b91e2bab2f89ab1dd34d198f0c02f6d2a065dcd72dec7\": container with ID starting with 96437ae1827b7020dc5b91e2bab2f89ab1dd34d198f0c02f6d2a065dcd72dec7 not found: ID does not exist" containerID="96437ae1827b7020dc5b91e2bab2f89ab1dd34d198f0c02f6d2a065dcd72dec7" Feb 17 18:10:29 crc kubenswrapper[4680]: I0217 18:10:29.235416 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96437ae1827b7020dc5b91e2bab2f89ab1dd34d198f0c02f6d2a065dcd72dec7"} err="failed to get container status \"96437ae1827b7020dc5b91e2bab2f89ab1dd34d198f0c02f6d2a065dcd72dec7\": rpc error: code = NotFound desc = could not find container \"96437ae1827b7020dc5b91e2bab2f89ab1dd34d198f0c02f6d2a065dcd72dec7\": container with ID starting with 96437ae1827b7020dc5b91e2bab2f89ab1dd34d198f0c02f6d2a065dcd72dec7 not found: ID does not exist" Feb 17 18:10:29 crc kubenswrapper[4680]: I0217 18:10:29.235442 4680 scope.go:117] "RemoveContainer" containerID="e27b4095843358e0d497cb02d4e291001ccd7ba2dfa8ad33df4a6dfb79733b4a" Feb 17 18:10:29 crc kubenswrapper[4680]: E0217 18:10:29.235953 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e27b4095843358e0d497cb02d4e291001ccd7ba2dfa8ad33df4a6dfb79733b4a\": container with ID starting with e27b4095843358e0d497cb02d4e291001ccd7ba2dfa8ad33df4a6dfb79733b4a not found: ID does not exist" containerID="e27b4095843358e0d497cb02d4e291001ccd7ba2dfa8ad33df4a6dfb79733b4a" Feb 17 18:10:29 crc kubenswrapper[4680]: I0217 18:10:29.235987 4680 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e27b4095843358e0d497cb02d4e291001ccd7ba2dfa8ad33df4a6dfb79733b4a"} err="failed to get container status \"e27b4095843358e0d497cb02d4e291001ccd7ba2dfa8ad33df4a6dfb79733b4a\": rpc error: code = NotFound desc = could not find container \"e27b4095843358e0d497cb02d4e291001ccd7ba2dfa8ad33df4a6dfb79733b4a\": container with ID starting with e27b4095843358e0d497cb02d4e291001ccd7ba2dfa8ad33df4a6dfb79733b4a not found: ID does not exist" Feb 17 18:10:29 crc kubenswrapper[4680]: I0217 18:10:29.236018 4680 scope.go:117] "RemoveContainer" containerID="b2a0b0b5ac6402c762b155070e975562a5a54d7be2f63b604a95d5fc8d50b300" Feb 17 18:10:29 crc kubenswrapper[4680]: E0217 18:10:29.236280 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2a0b0b5ac6402c762b155070e975562a5a54d7be2f63b604a95d5fc8d50b300\": container with ID starting with b2a0b0b5ac6402c762b155070e975562a5a54d7be2f63b604a95d5fc8d50b300 not found: ID does not exist" containerID="b2a0b0b5ac6402c762b155070e975562a5a54d7be2f63b604a95d5fc8d50b300" Feb 17 18:10:29 crc kubenswrapper[4680]: I0217 18:10:29.236309 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2a0b0b5ac6402c762b155070e975562a5a54d7be2f63b604a95d5fc8d50b300"} err="failed to get container status \"b2a0b0b5ac6402c762b155070e975562a5a54d7be2f63b604a95d5fc8d50b300\": rpc error: code = NotFound desc = could not find container \"b2a0b0b5ac6402c762b155070e975562a5a54d7be2f63b604a95d5fc8d50b300\": container with ID starting with b2a0b0b5ac6402c762b155070e975562a5a54d7be2f63b604a95d5fc8d50b300 not found: ID does not exist" Feb 17 18:10:31 crc kubenswrapper[4680]: I0217 18:10:31.138128 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f643cbdd-2cfa-4853-81a1-ebcc96a498b9" path="/var/lib/kubelet/pods/f643cbdd-2cfa-4853-81a1-ebcc96a498b9/volumes" Feb 17 18:10:35 crc kubenswrapper[4680]: I0217 18:10:35.106428 4680 scope.go:117] "RemoveContainer" containerID="3d8de076bce422cc1e91fb92ebdaf2197336fde15a42ede9bcc224d5d09c8ff1" Feb 17 18:10:35 crc kubenswrapper[4680]: E0217 18:10:35.107451 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:10:46 crc kubenswrapper[4680]: I0217 18:10:46.105060 4680 scope.go:117] "RemoveContainer" containerID="3d8de076bce422cc1e91fb92ebdaf2197336fde15a42ede9bcc224d5d09c8ff1" Feb 17 18:10:46 crc kubenswrapper[4680]: E0217 18:10:46.105856 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:11:01 crc kubenswrapper[4680]: I0217 18:11:01.111306 4680 scope.go:117] "RemoveContainer" containerID="3d8de076bce422cc1e91fb92ebdaf2197336fde15a42ede9bcc224d5d09c8ff1" Feb 17 18:11:01 crc 
kubenswrapper[4680]: E0217 18:11:01.112024 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:11:02 crc kubenswrapper[4680]: I0217 18:11:02.586161 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-87nfv"] Feb 17 18:11:02 crc kubenswrapper[4680]: E0217 18:11:02.586845 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f643cbdd-2cfa-4853-81a1-ebcc96a498b9" containerName="registry-server" Feb 17 18:11:02 crc kubenswrapper[4680]: I0217 18:11:02.586857 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f643cbdd-2cfa-4853-81a1-ebcc96a498b9" containerName="registry-server" Feb 17 18:11:02 crc kubenswrapper[4680]: E0217 18:11:02.586872 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f643cbdd-2cfa-4853-81a1-ebcc96a498b9" containerName="extract-content" Feb 17 18:11:02 crc kubenswrapper[4680]: I0217 18:11:02.586877 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f643cbdd-2cfa-4853-81a1-ebcc96a498b9" containerName="extract-content" Feb 17 18:11:02 crc kubenswrapper[4680]: E0217 18:11:02.586898 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f643cbdd-2cfa-4853-81a1-ebcc96a498b9" containerName="extract-utilities" Feb 17 18:11:02 crc kubenswrapper[4680]: I0217 18:11:02.586903 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f643cbdd-2cfa-4853-81a1-ebcc96a498b9" containerName="extract-utilities" Feb 17 18:11:02 crc kubenswrapper[4680]: I0217 18:11:02.587062 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f643cbdd-2cfa-4853-81a1-ebcc96a498b9" containerName="registry-server" Feb 17 18:11:02 crc kubenswrapper[4680]: I0217 18:11:02.588368 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-87nfv" Feb 17 18:11:02 crc kubenswrapper[4680]: I0217 18:11:02.597184 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-87nfv"] Feb 17 18:11:02 crc kubenswrapper[4680]: I0217 18:11:02.710591 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/180c5d85-119e-4e94-b965-f6279dd01726-catalog-content\") pod \"redhat-operators-87nfv\" (UID: \"180c5d85-119e-4e94-b965-f6279dd01726\") " pod="openshift-marketplace/redhat-operators-87nfv" Feb 17 18:11:02 crc kubenswrapper[4680]: I0217 18:11:02.710725 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6nrh\" (UniqueName: \"kubernetes.io/projected/180c5d85-119e-4e94-b965-f6279dd01726-kube-api-access-v6nrh\") pod \"redhat-operators-87nfv\" (UID: \"180c5d85-119e-4e94-b965-f6279dd01726\") " pod="openshift-marketplace/redhat-operators-87nfv" Feb 17 18:11:02 crc kubenswrapper[4680]: I0217 18:11:02.710786 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/180c5d85-119e-4e94-b965-f6279dd01726-utilities\") pod \"redhat-operators-87nfv\" (UID: \"180c5d85-119e-4e94-b965-f6279dd01726\") " pod="openshift-marketplace/redhat-operators-87nfv" Feb 17 18:11:02 crc kubenswrapper[4680]: I0217 18:11:02.812545 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/180c5d85-119e-4e94-b965-f6279dd01726-utilities\") pod \"redhat-operators-87nfv\" (UID: \"180c5d85-119e-4e94-b965-f6279dd01726\") " pod="openshift-marketplace/redhat-operators-87nfv" Feb 17 18:11:02 crc kubenswrapper[4680]: I0217 18:11:02.812700 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/180c5d85-119e-4e94-b965-f6279dd01726-catalog-content\") pod \"redhat-operators-87nfv\" (UID: \"180c5d85-119e-4e94-b965-f6279dd01726\") " pod="openshift-marketplace/redhat-operators-87nfv" Feb 17 18:11:02 crc kubenswrapper[4680]: I0217 18:11:02.812760 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6nrh\" (UniqueName: \"kubernetes.io/projected/180c5d85-119e-4e94-b965-f6279dd01726-kube-api-access-v6nrh\") pod \"redhat-operators-87nfv\" (UID: \"180c5d85-119e-4e94-b965-f6279dd01726\") " pod="openshift-marketplace/redhat-operators-87nfv" Feb 17 18:11:02 crc kubenswrapper[4680]: I0217 18:11:02.813508 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/180c5d85-119e-4e94-b965-f6279dd01726-catalog-content\") pod \"redhat-operators-87nfv\" (UID: \"180c5d85-119e-4e94-b965-f6279dd01726\") " pod="openshift-marketplace/redhat-operators-87nfv" Feb 17 18:11:02 crc kubenswrapper[4680]: I0217 18:11:02.813766 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/180c5d85-119e-4e94-b965-f6279dd01726-utilities\") pod \"redhat-operators-87nfv\" (UID: \"180c5d85-119e-4e94-b965-f6279dd01726\") " pod="openshift-marketplace/redhat-operators-87nfv" Feb 17 18:11:02 crc kubenswrapper[4680]: I0217 18:11:02.841275 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-v6nrh\" (UniqueName: \"kubernetes.io/projected/180c5d85-119e-4e94-b965-f6279dd01726-kube-api-access-v6nrh\") pod \"redhat-operators-87nfv\" (UID: \"180c5d85-119e-4e94-b965-f6279dd01726\") " pod="openshift-marketplace/redhat-operators-87nfv" Feb 17 18:11:02 crc kubenswrapper[4680]: I0217 18:11:02.973616 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-87nfv" Feb 17 18:11:03 crc kubenswrapper[4680]: I0217 18:11:03.484841 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-87nfv"] Feb 17 18:11:04 crc kubenswrapper[4680]: I0217 18:11:04.426725 4680 generic.go:334] "Generic (PLEG): container finished" podID="180c5d85-119e-4e94-b965-f6279dd01726" containerID="baebab24489295326b5f69913a7bbda834af70681e02a275ef89bc10136d80e4" exitCode=0 Feb 17 18:11:04 crc kubenswrapper[4680]: I0217 18:11:04.426758 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87nfv" event={"ID":"180c5d85-119e-4e94-b965-f6279dd01726","Type":"ContainerDied","Data":"baebab24489295326b5f69913a7bbda834af70681e02a275ef89bc10136d80e4"} Feb 17 18:11:04 crc kubenswrapper[4680]: I0217 18:11:04.427030 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87nfv" event={"ID":"180c5d85-119e-4e94-b965-f6279dd01726","Type":"ContainerStarted","Data":"531e6e22af533abf18ed5ef32eb13392e7fc713b8d89a5029abeb734397ac69d"} Feb 17 18:11:04 crc kubenswrapper[4680]: I0217 18:11:04.429630 4680 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 18:11:06 crc kubenswrapper[4680]: I0217 18:11:06.511039 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87nfv" event={"ID":"180c5d85-119e-4e94-b965-f6279dd01726","Type":"ContainerStarted","Data":"c29ac5fce88ad0be0fdbd681f3272f046982ee5b6c2d504ce8e1e8d00f0440be"} Feb 17 18:11:11 crc kubenswrapper[4680]: I0217 18:11:11.561009 4680 generic.go:334] "Generic (PLEG): container finished" podID="180c5d85-119e-4e94-b965-f6279dd01726" containerID="c29ac5fce88ad0be0fdbd681f3272f046982ee5b6c2d504ce8e1e8d00f0440be" exitCode=0 Feb 17 18:11:11 crc kubenswrapper[4680]: I0217 18:11:11.561102 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87nfv" event={"ID":"180c5d85-119e-4e94-b965-f6279dd01726","Type":"ContainerDied","Data":"c29ac5fce88ad0be0fdbd681f3272f046982ee5b6c2d504ce8e1e8d00f0440be"} Feb 17 18:11:12 crc kubenswrapper[4680]: I0217 18:11:12.105418 4680 scope.go:117] "RemoveContainer" containerID="3d8de076bce422cc1e91fb92ebdaf2197336fde15a42ede9bcc224d5d09c8ff1" Feb 17 18:11:12 crc kubenswrapper[4680]: E0217 18:11:12.105664 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:11:12 crc kubenswrapper[4680]: I0217 18:11:12.573482 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87nfv" 
event={"ID":"180c5d85-119e-4e94-b965-f6279dd01726","Type":"ContainerStarted","Data":"93ef9c570d4eb213bdcf246683f020fd9170ef408d32d855c24dbdaa950f057e"} Feb 17 18:11:12 crc kubenswrapper[4680]: I0217 18:11:12.597002 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-87nfv" podStartSLOduration=3.012369689 podStartE2EDuration="10.596979227s" podCreationTimestamp="2026-02-17 18:11:02 +0000 UTC" firstStartedPulling="2026-02-17 18:11:04.429327746 +0000 UTC m=+1593.980226425" lastFinishedPulling="2026-02-17 18:11:12.013937284 +0000 UTC m=+1601.564835963" observedRunningTime="2026-02-17 18:11:12.595158951 +0000 UTC m=+1602.146057630" watchObservedRunningTime="2026-02-17 18:11:12.596979227 +0000 UTC m=+1602.147877906" Feb 17 18:11:12 crc kubenswrapper[4680]: I0217 18:11:12.974291 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-87nfv" Feb 17 18:11:12 crc kubenswrapper[4680]: I0217 18:11:12.974349 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-87nfv" Feb 17 18:11:14 crc kubenswrapper[4680]: I0217 18:11:14.049491 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-87nfv" podUID="180c5d85-119e-4e94-b965-f6279dd01726" containerName="registry-server" probeResult="failure" output=< Feb 17 18:11:14 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Feb 17 18:11:14 crc kubenswrapper[4680]: > Feb 17 18:11:23 crc kubenswrapper[4680]: I0217 18:11:23.024346 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-87nfv" Feb 17 18:11:23 crc kubenswrapper[4680]: I0217 18:11:23.074437 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-87nfv" Feb 17 18:11:23 crc kubenswrapper[4680]: I0217 18:11:23.105348 4680 scope.go:117] "RemoveContainer" containerID="3d8de076bce422cc1e91fb92ebdaf2197336fde15a42ede9bcc224d5d09c8ff1" Feb 17 18:11:23 crc kubenswrapper[4680]: E0217 18:11:23.105631 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:11:23 crc kubenswrapper[4680]: I0217 18:11:23.267782 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-87nfv"] Feb 17 18:11:24 crc kubenswrapper[4680]: I0217 18:11:24.676303 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-87nfv" podUID="180c5d85-119e-4e94-b965-f6279dd01726" containerName="registry-server" containerID="cri-o://93ef9c570d4eb213bdcf246683f020fd9170ef408d32d855c24dbdaa950f057e" gracePeriod=2 Feb 17 18:11:25 crc kubenswrapper[4680]: I0217 18:11:25.288339 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-87nfv" Feb 17 18:11:25 crc kubenswrapper[4680]: I0217 18:11:25.454649 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/180c5d85-119e-4e94-b965-f6279dd01726-utilities\") pod \"180c5d85-119e-4e94-b965-f6279dd01726\" (UID: \"180c5d85-119e-4e94-b965-f6279dd01726\") " Feb 17 18:11:25 crc kubenswrapper[4680]: I0217 18:11:25.454923 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6nrh\" (UniqueName: \"kubernetes.io/projected/180c5d85-119e-4e94-b965-f6279dd01726-kube-api-access-v6nrh\") pod \"180c5d85-119e-4e94-b965-f6279dd01726\" (UID: \"180c5d85-119e-4e94-b965-f6279dd01726\") " Feb 17 18:11:25 crc kubenswrapper[4680]: I0217 18:11:25.455013 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/180c5d85-119e-4e94-b965-f6279dd01726-catalog-content\") pod \"180c5d85-119e-4e94-b965-f6279dd01726\" (UID: \"180c5d85-119e-4e94-b965-f6279dd01726\") " Feb 17 18:11:25 crc kubenswrapper[4680]: I0217 18:11:25.455750 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/180c5d85-119e-4e94-b965-f6279dd01726-utilities" (OuterVolumeSpecName: "utilities") pod "180c5d85-119e-4e94-b965-f6279dd01726" (UID: "180c5d85-119e-4e94-b965-f6279dd01726"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:11:25 crc kubenswrapper[4680]: I0217 18:11:25.490887 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/180c5d85-119e-4e94-b965-f6279dd01726-kube-api-access-v6nrh" (OuterVolumeSpecName: "kube-api-access-v6nrh") pod "180c5d85-119e-4e94-b965-f6279dd01726" (UID: "180c5d85-119e-4e94-b965-f6279dd01726"). InnerVolumeSpecName "kube-api-access-v6nrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:11:25 crc kubenswrapper[4680]: I0217 18:11:25.557458 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6nrh\" (UniqueName: \"kubernetes.io/projected/180c5d85-119e-4e94-b965-f6279dd01726-kube-api-access-v6nrh\") on node \"crc\" DevicePath \"\"" Feb 17 18:11:25 crc kubenswrapper[4680]: I0217 18:11:25.557702 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/180c5d85-119e-4e94-b965-f6279dd01726-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 18:11:25 crc kubenswrapper[4680]: I0217 18:11:25.577161 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/180c5d85-119e-4e94-b965-f6279dd01726-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "180c5d85-119e-4e94-b965-f6279dd01726" (UID: "180c5d85-119e-4e94-b965-f6279dd01726"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:11:25 crc kubenswrapper[4680]: I0217 18:11:25.659776 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/180c5d85-119e-4e94-b965-f6279dd01726-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 18:11:25 crc kubenswrapper[4680]: I0217 18:11:25.685943 4680 generic.go:334] "Generic (PLEG): container finished" podID="180c5d85-119e-4e94-b965-f6279dd01726" containerID="93ef9c570d4eb213bdcf246683f020fd9170ef408d32d855c24dbdaa950f057e" exitCode=0 Feb 17 18:11:25 crc kubenswrapper[4680]: I0217 18:11:25.685990 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87nfv" event={"ID":"180c5d85-119e-4e94-b965-f6279dd01726","Type":"ContainerDied","Data":"93ef9c570d4eb213bdcf246683f020fd9170ef408d32d855c24dbdaa950f057e"} Feb 17 18:11:25 crc kubenswrapper[4680]: I0217 18:11:25.686021 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87nfv" event={"ID":"180c5d85-119e-4e94-b965-f6279dd01726","Type":"ContainerDied","Data":"531e6e22af533abf18ed5ef32eb13392e7fc713b8d89a5029abeb734397ac69d"} Feb 17 18:11:25 crc kubenswrapper[4680]: I0217 18:11:25.686038 4680 scope.go:117] "RemoveContainer" containerID="93ef9c570d4eb213bdcf246683f020fd9170ef408d32d855c24dbdaa950f057e" Feb 17 18:11:25 crc kubenswrapper[4680]: I0217 18:11:25.686060 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-87nfv" Feb 17 18:11:25 crc kubenswrapper[4680]: I0217 18:11:25.713279 4680 scope.go:117] "RemoveContainer" containerID="c29ac5fce88ad0be0fdbd681f3272f046982ee5b6c2d504ce8e1e8d00f0440be" Feb 17 18:11:25 crc kubenswrapper[4680]: I0217 18:11:25.753642 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-87nfv"] Feb 17 18:11:25 crc kubenswrapper[4680]: I0217 18:11:25.762303 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-87nfv"] Feb 17 18:11:25 crc kubenswrapper[4680]: I0217 18:11:25.775554 4680 scope.go:117] "RemoveContainer" containerID="baebab24489295326b5f69913a7bbda834af70681e02a275ef89bc10136d80e4" Feb 17 18:11:25 crc kubenswrapper[4680]: I0217 18:11:25.795854 4680 scope.go:117] "RemoveContainer" containerID="93ef9c570d4eb213bdcf246683f020fd9170ef408d32d855c24dbdaa950f057e" Feb 17 18:11:25 crc kubenswrapper[4680]: E0217 18:11:25.796495 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93ef9c570d4eb213bdcf246683f020fd9170ef408d32d855c24dbdaa950f057e\": container with ID starting with 93ef9c570d4eb213bdcf246683f020fd9170ef408d32d855c24dbdaa950f057e not found: ID does not exist" containerID="93ef9c570d4eb213bdcf246683f020fd9170ef408d32d855c24dbdaa950f057e" Feb 17 18:11:25 crc kubenswrapper[4680]: I0217 18:11:25.796521 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93ef9c570d4eb213bdcf246683f020fd9170ef408d32d855c24dbdaa950f057e"} err="failed to get container status \"93ef9c570d4eb213bdcf246683f020fd9170ef408d32d855c24dbdaa950f057e\": rpc error: code = NotFound desc = could not find container \"93ef9c570d4eb213bdcf246683f020fd9170ef408d32d855c24dbdaa950f057e\": container with ID starting with 93ef9c570d4eb213bdcf246683f020fd9170ef408d32d855c24dbdaa950f057e not found: ID does not exist" Feb 17 18:11:25 crc 
kubenswrapper[4680]: I0217 18:11:25.796558 4680 scope.go:117] "RemoveContainer" containerID="c29ac5fce88ad0be0fdbd681f3272f046982ee5b6c2d504ce8e1e8d00f0440be" Feb 17 18:11:25 crc kubenswrapper[4680]: E0217 18:11:25.796884 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c29ac5fce88ad0be0fdbd681f3272f046982ee5b6c2d504ce8e1e8d00f0440be\": container with ID starting with c29ac5fce88ad0be0fdbd681f3272f046982ee5b6c2d504ce8e1e8d00f0440be not found: ID does not exist" containerID="c29ac5fce88ad0be0fdbd681f3272f046982ee5b6c2d504ce8e1e8d00f0440be" Feb 17 18:11:25 crc kubenswrapper[4680]: I0217 18:11:25.796933 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c29ac5fce88ad0be0fdbd681f3272f046982ee5b6c2d504ce8e1e8d00f0440be"} err="failed to get container status \"c29ac5fce88ad0be0fdbd681f3272f046982ee5b6c2d504ce8e1e8d00f0440be\": rpc error: code = NotFound desc = could not find container \"c29ac5fce88ad0be0fdbd681f3272f046982ee5b6c2d504ce8e1e8d00f0440be\": container with ID starting with c29ac5fce88ad0be0fdbd681f3272f046982ee5b6c2d504ce8e1e8d00f0440be not found: ID does not exist" Feb 17 18:11:25 crc kubenswrapper[4680]: I0217 18:11:25.796969 4680 scope.go:117] "RemoveContainer" containerID="baebab24489295326b5f69913a7bbda834af70681e02a275ef89bc10136d80e4" Feb 17 18:11:25 crc kubenswrapper[4680]: E0217 18:11:25.797928 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"baebab24489295326b5f69913a7bbda834af70681e02a275ef89bc10136d80e4\": container with ID starting with baebab24489295326b5f69913a7bbda834af70681e02a275ef89bc10136d80e4 not found: ID does not exist" containerID="baebab24489295326b5f69913a7bbda834af70681e02a275ef89bc10136d80e4" Feb 17 18:11:25 crc kubenswrapper[4680]: I0217 18:11:25.797967 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baebab24489295326b5f69913a7bbda834af70681e02a275ef89bc10136d80e4"} err="failed to get container status \"baebab24489295326b5f69913a7bbda834af70681e02a275ef89bc10136d80e4\": rpc error: code = NotFound desc = could not find container \"baebab24489295326b5f69913a7bbda834af70681e02a275ef89bc10136d80e4\": container with ID starting with baebab24489295326b5f69913a7bbda834af70681e02a275ef89bc10136d80e4 not found: ID does not exist" Feb 17 18:11:27 crc kubenswrapper[4680]: I0217 18:11:27.115705 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="180c5d85-119e-4e94-b965-f6279dd01726" path="/var/lib/kubelet/pods/180c5d85-119e-4e94-b965-f6279dd01726/volumes" Feb 17 18:11:30 crc kubenswrapper[4680]: I0217 18:11:30.067114 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-lrlzd"] Feb 17 18:11:30 crc kubenswrapper[4680]: I0217 18:11:30.080283 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-3cf6-account-create-update-rhwp9"] Feb 17 18:11:30 crc kubenswrapper[4680]: I0217 18:11:30.090114 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-gh6l2"] Feb 17 18:11:30 crc kubenswrapper[4680]: I0217 18:11:30.098696 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-dbbf-account-create-update-v8ksh"] Feb 17 18:11:30 crc kubenswrapper[4680]: I0217 18:11:30.110155 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-fdc1-account-create-update-zp4v9"] 
Feb 17 18:11:30 crc kubenswrapper[4680]: I0217 18:11:30.118361 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-qjgvr"] Feb 17 18:11:30 crc kubenswrapper[4680]: I0217 18:11:30.127357 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-lrlzd"] Feb 17 18:11:30 crc kubenswrapper[4680]: I0217 18:11:30.137943 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-3cf6-account-create-update-rhwp9"] Feb 17 18:11:30 crc kubenswrapper[4680]: I0217 18:11:30.147327 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-fdc1-account-create-update-zp4v9"] Feb 17 18:11:30 crc kubenswrapper[4680]: I0217 18:11:30.156823 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-gh6l2"] Feb 17 18:11:30 crc kubenswrapper[4680]: I0217 18:11:30.165840 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-qjgvr"] Feb 17 18:11:30 crc kubenswrapper[4680]: I0217 18:11:30.173174 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-dbbf-account-create-update-v8ksh"] Feb 17 18:11:31 crc kubenswrapper[4680]: I0217 18:11:31.118596 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b62ed95-9eed-4e65-979a-a2d24fd25ef7" path="/var/lib/kubelet/pods/4b62ed95-9eed-4e65-979a-a2d24fd25ef7/volumes" Feb 17 18:11:31 crc kubenswrapper[4680]: I0217 18:11:31.120723 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53ece1f9-7f88-4f68-a37d-19436d21105c" path="/var/lib/kubelet/pods/53ece1f9-7f88-4f68-a37d-19436d21105c/volumes" Feb 17 18:11:31 crc kubenswrapper[4680]: I0217 18:11:31.122595 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9da1b635-d49b-4f80-9e40-a70d5f899087" path="/var/lib/kubelet/pods/9da1b635-d49b-4f80-9e40-a70d5f899087/volumes" Feb 17 18:11:31 crc kubenswrapper[4680]: I0217 18:11:31.124657 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e869c352-adf6-4ae6-9a3e-9d35179d50a0" path="/var/lib/kubelet/pods/e869c352-adf6-4ae6-9a3e-9d35179d50a0/volumes" Feb 17 18:11:31 crc kubenswrapper[4680]: I0217 18:11:31.126011 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed0cf4f6-40a2-4df6-bdf2-959b604512bc" path="/var/lib/kubelet/pods/ed0cf4f6-40a2-4df6-bdf2-959b604512bc/volumes" Feb 17 18:11:31 crc kubenswrapper[4680]: I0217 18:11:31.127652 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eddac82b-15c0-4bb8-96e4-9503ac536137" path="/var/lib/kubelet/pods/eddac82b-15c0-4bb8-96e4-9503ac536137/volumes" Feb 17 18:11:36 crc kubenswrapper[4680]: I0217 18:11:36.105225 4680 scope.go:117] "RemoveContainer" containerID="3d8de076bce422cc1e91fb92ebdaf2197336fde15a42ede9bcc224d5d09c8ff1" Feb 17 18:11:36 crc kubenswrapper[4680]: E0217 18:11:36.105951 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:11:38 crc kubenswrapper[4680]: I0217 18:11:38.799220 4680 generic.go:334] "Generic (PLEG): container finished" podID="756e6a23-4101-4886-bfe8-cc4f12799fd7" 
containerID="afd1465827bfabf8da23c9dc5e37b76dda67a1e25fb229dbef02a029e97e14d0" exitCode=0 Feb 17 18:11:38 crc kubenswrapper[4680]: I0217 18:11:38.799254 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd" event={"ID":"756e6a23-4101-4886-bfe8-cc4f12799fd7","Type":"ContainerDied","Data":"afd1465827bfabf8da23c9dc5e37b76dda67a1e25fb229dbef02a029e97e14d0"} Feb 17 18:11:40 crc kubenswrapper[4680]: I0217 18:11:40.221568 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd" Feb 17 18:11:40 crc kubenswrapper[4680]: I0217 18:11:40.318705 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fxpr\" (UniqueName: \"kubernetes.io/projected/756e6a23-4101-4886-bfe8-cc4f12799fd7-kube-api-access-8fxpr\") pod \"756e6a23-4101-4886-bfe8-cc4f12799fd7\" (UID: \"756e6a23-4101-4886-bfe8-cc4f12799fd7\") " Feb 17 18:11:40 crc kubenswrapper[4680]: I0217 18:11:40.318756 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/756e6a23-4101-4886-bfe8-cc4f12799fd7-bootstrap-combined-ca-bundle\") pod \"756e6a23-4101-4886-bfe8-cc4f12799fd7\" (UID: \"756e6a23-4101-4886-bfe8-cc4f12799fd7\") " Feb 17 18:11:40 crc kubenswrapper[4680]: I0217 18:11:40.318811 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/756e6a23-4101-4886-bfe8-cc4f12799fd7-inventory\") pod \"756e6a23-4101-4886-bfe8-cc4f12799fd7\" (UID: \"756e6a23-4101-4886-bfe8-cc4f12799fd7\") " Feb 17 18:11:40 crc kubenswrapper[4680]: I0217 18:11:40.318948 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/756e6a23-4101-4886-bfe8-cc4f12799fd7-ssh-key-openstack-edpm-ipam\") pod \"756e6a23-4101-4886-bfe8-cc4f12799fd7\" (UID: \"756e6a23-4101-4886-bfe8-cc4f12799fd7\") " Feb 17 18:11:40 crc kubenswrapper[4680]: I0217 18:11:40.327616 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/756e6a23-4101-4886-bfe8-cc4f12799fd7-kube-api-access-8fxpr" (OuterVolumeSpecName: "kube-api-access-8fxpr") pod "756e6a23-4101-4886-bfe8-cc4f12799fd7" (UID: "756e6a23-4101-4886-bfe8-cc4f12799fd7"). InnerVolumeSpecName "kube-api-access-8fxpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:11:40 crc kubenswrapper[4680]: I0217 18:11:40.332605 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/756e6a23-4101-4886-bfe8-cc4f12799fd7-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "756e6a23-4101-4886-bfe8-cc4f12799fd7" (UID: "756e6a23-4101-4886-bfe8-cc4f12799fd7"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:11:40 crc kubenswrapper[4680]: I0217 18:11:40.348362 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/756e6a23-4101-4886-bfe8-cc4f12799fd7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "756e6a23-4101-4886-bfe8-cc4f12799fd7" (UID: "756e6a23-4101-4886-bfe8-cc4f12799fd7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:11:40 crc kubenswrapper[4680]: I0217 18:11:40.359244 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/756e6a23-4101-4886-bfe8-cc4f12799fd7-inventory" (OuterVolumeSpecName: "inventory") pod "756e6a23-4101-4886-bfe8-cc4f12799fd7" (UID: "756e6a23-4101-4886-bfe8-cc4f12799fd7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:11:40 crc kubenswrapper[4680]: I0217 18:11:40.423070 4680 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/756e6a23-4101-4886-bfe8-cc4f12799fd7-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:11:40 crc kubenswrapper[4680]: I0217 18:11:40.423245 4680 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/756e6a23-4101-4886-bfe8-cc4f12799fd7-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 18:11:40 crc kubenswrapper[4680]: I0217 18:11:40.423347 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/756e6a23-4101-4886-bfe8-cc4f12799fd7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 18:11:40 crc kubenswrapper[4680]: I0217 18:11:40.423412 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fxpr\" (UniqueName: \"kubernetes.io/projected/756e6a23-4101-4886-bfe8-cc4f12799fd7-kube-api-access-8fxpr\") on node \"crc\" DevicePath \"\"" Feb 17 18:11:40 crc kubenswrapper[4680]: I0217 18:11:40.818387 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd" Feb 17 18:11:40 crc kubenswrapper[4680]: I0217 18:11:40.818415 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd" event={"ID":"756e6a23-4101-4886-bfe8-cc4f12799fd7","Type":"ContainerDied","Data":"8a4ed59424c3cf11c56f8f1a8189be69a13567354a046c95b48aa58ebaed376a"} Feb 17 18:11:40 crc kubenswrapper[4680]: I0217 18:11:40.818460 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a4ed59424c3cf11c56f8f1a8189be69a13567354a046c95b48aa58ebaed376a" Feb 17 18:11:40 crc kubenswrapper[4680]: I0217 18:11:40.913874 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sstxg"] Feb 17 18:11:40 crc kubenswrapper[4680]: E0217 18:11:40.914294 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="180c5d85-119e-4e94-b965-f6279dd01726" containerName="extract-utilities" Feb 17 18:11:40 crc kubenswrapper[4680]: I0217 18:11:40.914334 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="180c5d85-119e-4e94-b965-f6279dd01726" containerName="extract-utilities" Feb 17 18:11:40 crc kubenswrapper[4680]: E0217 18:11:40.914364 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="180c5d85-119e-4e94-b965-f6279dd01726" containerName="extract-content" Feb 17 18:11:40 crc kubenswrapper[4680]: I0217 18:11:40.914373 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="180c5d85-119e-4e94-b965-f6279dd01726" containerName="extract-content" Feb 17 18:11:40 crc kubenswrapper[4680]: E0217 18:11:40.914388 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="756e6a23-4101-4886-bfe8-cc4f12799fd7" 
containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 17 18:11:40 crc kubenswrapper[4680]: I0217 18:11:40.914398 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="756e6a23-4101-4886-bfe8-cc4f12799fd7" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 17 18:11:40 crc kubenswrapper[4680]: E0217 18:11:40.914418 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="180c5d85-119e-4e94-b965-f6279dd01726" containerName="registry-server" Feb 17 18:11:40 crc kubenswrapper[4680]: I0217 18:11:40.914427 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="180c5d85-119e-4e94-b965-f6279dd01726" containerName="registry-server" Feb 17 18:11:40 crc kubenswrapper[4680]: I0217 18:11:40.914713 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="180c5d85-119e-4e94-b965-f6279dd01726" containerName="registry-server" Feb 17 18:11:40 crc kubenswrapper[4680]: I0217 18:11:40.914759 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="756e6a23-4101-4886-bfe8-cc4f12799fd7" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 17 18:11:40 crc kubenswrapper[4680]: I0217 18:11:40.915500 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sstxg" Feb 17 18:11:40 crc kubenswrapper[4680]: I0217 18:11:40.917704 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 18:11:40 crc kubenswrapper[4680]: I0217 18:11:40.918440 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 18:11:40 crc kubenswrapper[4680]: I0217 18:11:40.921602 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 18:11:40 crc kubenswrapper[4680]: I0217 18:11:40.935035 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mhdwq" Feb 17 18:11:40 crc kubenswrapper[4680]: I0217 18:11:40.948569 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sstxg"] Feb 17 18:11:41 crc kubenswrapper[4680]: I0217 18:11:41.048480 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc6f1259-7540-444b-997b-4bfdddbc6977-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sstxg\" (UID: \"fc6f1259-7540-444b-997b-4bfdddbc6977\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sstxg" Feb 17 18:11:41 crc kubenswrapper[4680]: I0217 18:11:41.048517 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc6f1259-7540-444b-997b-4bfdddbc6977-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sstxg\" (UID: \"fc6f1259-7540-444b-997b-4bfdddbc6977\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sstxg" Feb 17 18:11:41 crc kubenswrapper[4680]: I0217 18:11:41.048550 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppkvb\" (UniqueName: \"kubernetes.io/projected/fc6f1259-7540-444b-997b-4bfdddbc6977-kube-api-access-ppkvb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sstxg\" (UID: 
\"fc6f1259-7540-444b-997b-4bfdddbc6977\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sstxg" Feb 17 18:11:41 crc kubenswrapper[4680]: I0217 18:11:41.150739 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc6f1259-7540-444b-997b-4bfdddbc6977-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sstxg\" (UID: \"fc6f1259-7540-444b-997b-4bfdddbc6977\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sstxg" Feb 17 18:11:41 crc kubenswrapper[4680]: I0217 18:11:41.150808 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc6f1259-7540-444b-997b-4bfdddbc6977-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sstxg\" (UID: \"fc6f1259-7540-444b-997b-4bfdddbc6977\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sstxg" Feb 17 18:11:41 crc kubenswrapper[4680]: I0217 18:11:41.150842 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppkvb\" (UniqueName: \"kubernetes.io/projected/fc6f1259-7540-444b-997b-4bfdddbc6977-kube-api-access-ppkvb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sstxg\" (UID: \"fc6f1259-7540-444b-997b-4bfdddbc6977\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sstxg" Feb 17 18:11:41 crc kubenswrapper[4680]: I0217 18:11:41.154398 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc6f1259-7540-444b-997b-4bfdddbc6977-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sstxg\" (UID: \"fc6f1259-7540-444b-997b-4bfdddbc6977\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sstxg" Feb 17 18:11:41 crc kubenswrapper[4680]: I0217 18:11:41.158733 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc6f1259-7540-444b-997b-4bfdddbc6977-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sstxg\" (UID: \"fc6f1259-7540-444b-997b-4bfdddbc6977\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sstxg" Feb 17 18:11:41 crc kubenswrapper[4680]: I0217 18:11:41.166998 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppkvb\" (UniqueName: \"kubernetes.io/projected/fc6f1259-7540-444b-997b-4bfdddbc6977-kube-api-access-ppkvb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sstxg\" (UID: \"fc6f1259-7540-444b-997b-4bfdddbc6977\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sstxg" Feb 17 18:11:41 crc kubenswrapper[4680]: I0217 18:11:41.251955 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sstxg" Feb 17 18:11:41 crc kubenswrapper[4680]: I0217 18:11:41.814719 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sstxg"] Feb 17 18:11:41 crc kubenswrapper[4680]: I0217 18:11:41.832075 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sstxg" event={"ID":"fc6f1259-7540-444b-997b-4bfdddbc6977","Type":"ContainerStarted","Data":"67d041579fae347281d1fdcfa83a7e2906f1dd3d87f8c697116a7cbdc33d994d"} Feb 17 18:11:42 crc kubenswrapper[4680]: I0217 18:11:42.843736 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sstxg" event={"ID":"fc6f1259-7540-444b-997b-4bfdddbc6977","Type":"ContainerStarted","Data":"23883269ad4a13f7146343963534ff8e8fe6ad89f5a7d1fd7c10e8a276526288"} Feb 17 18:11:42 crc kubenswrapper[4680]: I0217 18:11:42.860902 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sstxg" podStartSLOduration=2.427143549 podStartE2EDuration="2.86088482s" podCreationTimestamp="2026-02-17 18:11:40 +0000 UTC" firstStartedPulling="2026-02-17 18:11:41.818512163 +0000 UTC m=+1631.369410852" lastFinishedPulling="2026-02-17 18:11:42.252253444 +0000 UTC m=+1631.803152123" observedRunningTime="2026-02-17 18:11:42.858168995 +0000 UTC m=+1632.409067684" watchObservedRunningTime="2026-02-17 18:11:42.86088482 +0000 UTC m=+1632.411783499" Feb 17 18:11:50 crc kubenswrapper[4680]: I0217 18:11:50.105366 4680 scope.go:117] "RemoveContainer" containerID="3d8de076bce422cc1e91fb92ebdaf2197336fde15a42ede9bcc224d5d09c8ff1" Feb 17 18:11:50 crc kubenswrapper[4680]: E0217 18:11:50.106052 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:11:53 crc kubenswrapper[4680]: I0217 18:11:53.011831 4680 scope.go:117] "RemoveContainer" containerID="a68d47f44d90d3fbe0655f7b2b3d037fb22919baa0981f23ff4c35d870253b2a" Feb 17 18:11:53 crc kubenswrapper[4680]: I0217 18:11:53.037336 4680 scope.go:117] "RemoveContainer" containerID="01bd8367c2435cbbb8f3ccde95991dad77fc90f9e13c320193e946ccfcabd5a0" Feb 17 18:11:53 crc kubenswrapper[4680]: I0217 18:11:53.040141 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-ss6nw"] Feb 17 18:11:53 crc kubenswrapper[4680]: I0217 18:11:53.048656 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-ss6nw"] Feb 17 18:11:53 crc kubenswrapper[4680]: I0217 18:11:53.069443 4680 scope.go:117] "RemoveContainer" containerID="8016522a667f8e8b07b787799ccb97b3e1695eb7f240cfeb1073717565d6a71b" Feb 17 18:11:53 crc kubenswrapper[4680]: I0217 18:11:53.088313 4680 scope.go:117] "RemoveContainer" containerID="5b9e3080c416b010f32f7323b5bcdbe18aff9977b9ae167d0d605a7ba8691869" Feb 17 18:11:53 crc kubenswrapper[4680]: I0217 18:11:53.116568 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb237184-93a1-48b5-8ae7-bfccb72ddb6c" 
path="/var/lib/kubelet/pods/bb237184-93a1-48b5-8ae7-bfccb72ddb6c/volumes" Feb 17 18:11:53 crc kubenswrapper[4680]: I0217 18:11:53.130955 4680 scope.go:117] "RemoveContainer" containerID="bfc024701faed933819be6dc1a433590a0abae932248ca5837911809f1c3e9e6" Feb 17 18:11:53 crc kubenswrapper[4680]: I0217 18:11:53.181249 4680 scope.go:117] "RemoveContainer" containerID="dd0f5ec7194c958992254c0c6b9aa5f38c359166404fc455741d5650f80748d2" Feb 17 18:11:53 crc kubenswrapper[4680]: I0217 18:11:53.204963 4680 scope.go:117] "RemoveContainer" containerID="ebcdc95cd812523e9fa1dc8c9cd53f6dbb38efdce728e8663d559a07fe48a056" Feb 17 18:11:53 crc kubenswrapper[4680]: I0217 18:11:53.257542 4680 scope.go:117] "RemoveContainer" containerID="b21cb920497dcd61d66da81791c8596450d1fc5a3b3b1cff10c29c4b441ea616" Feb 17 18:11:53 crc kubenswrapper[4680]: I0217 18:11:53.281121 4680 scope.go:117] "RemoveContainer" containerID="bc122f830a3abebedd821d5694834ff0a0277d01647e1697e54cc8e55d7252ea" Feb 17 18:11:53 crc kubenswrapper[4680]: I0217 18:11:53.341112 4680 scope.go:117] "RemoveContainer" containerID="ca1edc19e814ae190aedcd7c6f0821e1b9ed05931b7b4d481f2412af68a9e4f8" Feb 17 18:11:53 crc kubenswrapper[4680]: I0217 18:11:53.377710 4680 scope.go:117] "RemoveContainer" containerID="3df45c12bd4773e04d7241fc168a4763d742ff32685eea759dfc519a0190c90f" Feb 17 18:11:53 crc kubenswrapper[4680]: I0217 18:11:53.412597 4680 scope.go:117] "RemoveContainer" containerID="c95ebdc2817c85633ef631ca27fbd9a435fd838e1456007b2ec0e1e42566b8d3" Feb 17 18:11:53 crc kubenswrapper[4680]: I0217 18:11:53.430962 4680 scope.go:117] "RemoveContainer" containerID="5e5d1f2959951a5dc6e4eea7b27814a5fa0d394a43aa60b119d1e7aa6f9865af" Feb 17 18:11:56 crc kubenswrapper[4680]: I0217 18:11:56.026433 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-ck6vb"] Feb 17 18:11:56 crc kubenswrapper[4680]: I0217 18:11:56.033706 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-ncbx6"] Feb 17 18:11:56 crc kubenswrapper[4680]: I0217 18:11:56.041254 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-ck6vb"] Feb 17 18:11:56 crc kubenswrapper[4680]: I0217 18:11:56.049081 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-ncbx6"] Feb 17 18:11:57 crc kubenswrapper[4680]: I0217 18:11:57.028638 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-72aa-account-create-update-ssg2c"] Feb 17 18:11:57 crc kubenswrapper[4680]: I0217 18:11:57.039360 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-72aa-account-create-update-ssg2c"] Feb 17 18:11:57 crc kubenswrapper[4680]: I0217 18:11:57.049461 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-379c-account-create-update-phc6b"] Feb 17 18:11:57 crc kubenswrapper[4680]: I0217 18:11:57.057174 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-379c-account-create-update-phc6b"] Feb 17 18:11:57 crc kubenswrapper[4680]: I0217 18:11:57.118172 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="065d2f14-a873-4ca1-b0aa-6bde51d5aec0" path="/var/lib/kubelet/pods/065d2f14-a873-4ca1-b0aa-6bde51d5aec0/volumes" Feb 17 18:11:57 crc kubenswrapper[4680]: I0217 18:11:57.122377 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46799f27-6bac-4031-9277-7310bf49adbc" path="/var/lib/kubelet/pods/46799f27-6bac-4031-9277-7310bf49adbc/volumes" Feb 17 18:11:57 crc 
kubenswrapper[4680]: I0217 18:11:57.125809 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="904c1e1e-004a-4ec1-8981-2bcbbbbea7e0" path="/var/lib/kubelet/pods/904c1e1e-004a-4ec1-8981-2bcbbbbea7e0/volumes" Feb 17 18:11:57 crc kubenswrapper[4680]: I0217 18:11:57.128983 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff9aefee-2f32-417a-a2e1-4cb309fc6c19" path="/var/lib/kubelet/pods/ff9aefee-2f32-417a-a2e1-4cb309fc6c19/volumes" Feb 17 18:11:58 crc kubenswrapper[4680]: I0217 18:11:58.030507 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-988e-account-create-update-5qtck"] Feb 17 18:11:58 crc kubenswrapper[4680]: I0217 18:11:58.040295 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-lvrxv"] Feb 17 18:11:58 crc kubenswrapper[4680]: I0217 18:11:58.050052 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-lvrxv"] Feb 17 18:11:58 crc kubenswrapper[4680]: I0217 18:11:58.057653 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-988e-account-create-update-5qtck"] Feb 17 18:11:59 crc kubenswrapper[4680]: I0217 18:11:59.116947 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e03ce69-e84a-4e27-bdc6-48625c11da9f" path="/var/lib/kubelet/pods/8e03ce69-e84a-4e27-bdc6-48625c11da9f/volumes" Feb 17 18:11:59 crc kubenswrapper[4680]: I0217 18:11:59.120633 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0686b19-fda5-4f55-88b4-f831de276058" path="/var/lib/kubelet/pods/d0686b19-fda5-4f55-88b4-f831de276058/volumes" Feb 17 18:12:05 crc kubenswrapper[4680]: I0217 18:12:05.029780 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-dnfft"] Feb 17 18:12:05 crc kubenswrapper[4680]: I0217 18:12:05.044822 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-dnfft"] Feb 17 18:12:05 crc kubenswrapper[4680]: I0217 18:12:05.105753 4680 scope.go:117] "RemoveContainer" containerID="3d8de076bce422cc1e91fb92ebdaf2197336fde15a42ede9bcc224d5d09c8ff1" Feb 17 18:12:05 crc kubenswrapper[4680]: E0217 18:12:05.106061 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:12:05 crc kubenswrapper[4680]: I0217 18:12:05.116713 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c1a34b3-87e0-4584-8a46-18320fe88a7e" path="/var/lib/kubelet/pods/4c1a34b3-87e0-4584-8a46-18320fe88a7e/volumes" Feb 17 18:12:07 crc kubenswrapper[4680]: I0217 18:12:07.028032 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-96c9q"] Feb 17 18:12:07 crc kubenswrapper[4680]: I0217 18:12:07.037085 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-96c9q"] Feb 17 18:12:07 crc kubenswrapper[4680]: I0217 18:12:07.120760 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce8c3e62-2beb-456a-b77b-386213f4bf98" path="/var/lib/kubelet/pods/ce8c3e62-2beb-456a-b77b-386213f4bf98/volumes" Feb 17 18:12:20 crc kubenswrapper[4680]: I0217 18:12:20.105531 4680 scope.go:117] "RemoveContainer" 
containerID="3d8de076bce422cc1e91fb92ebdaf2197336fde15a42ede9bcc224d5d09c8ff1" Feb 17 18:12:20 crc kubenswrapper[4680]: E0217 18:12:20.106272 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:12:32 crc kubenswrapper[4680]: I0217 18:12:32.105525 4680 scope.go:117] "RemoveContainer" containerID="3d8de076bce422cc1e91fb92ebdaf2197336fde15a42ede9bcc224d5d09c8ff1" Feb 17 18:12:32 crc kubenswrapper[4680]: E0217 18:12:32.108526 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:12:47 crc kubenswrapper[4680]: I0217 18:12:47.105915 4680 scope.go:117] "RemoveContainer" containerID="3d8de076bce422cc1e91fb92ebdaf2197336fde15a42ede9bcc224d5d09c8ff1" Feb 17 18:12:47 crc kubenswrapper[4680]: E0217 18:12:47.106628 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:12:53 crc kubenswrapper[4680]: I0217 18:12:53.614090 4680 scope.go:117] "RemoveContainer" containerID="4fdcbf45c1e97f6a68666baf33df630aaca8de21af46e4058c34e571c69be445" Feb 17 18:12:53 crc kubenswrapper[4680]: I0217 18:12:53.643557 4680 scope.go:117] "RemoveContainer" containerID="8c0b9330037ebc4b13964cceaf9af9d288a48b9fb5a2e07c04580bc9834cfd37" Feb 17 18:12:53 crc kubenswrapper[4680]: I0217 18:12:53.682670 4680 scope.go:117] "RemoveContainer" containerID="6fb39025c05e3edcf45598fd9c8259e9e07bfa52f700036a907cc2c3b7c577cc" Feb 17 18:12:53 crc kubenswrapper[4680]: I0217 18:12:53.741237 4680 scope.go:117] "RemoveContainer" containerID="f85a3cba93c57c0aaae7f3708c676699085b6811c2404eaf8b13976d7b9c6a84" Feb 17 18:12:53 crc kubenswrapper[4680]: I0217 18:12:53.774822 4680 scope.go:117] "RemoveContainer" containerID="7bb236f312bb3bfa1c8f9aafa99550ae6f38fbccb021cd18207627d42a061fd4" Feb 17 18:12:53 crc kubenswrapper[4680]: I0217 18:12:53.813086 4680 scope.go:117] "RemoveContainer" containerID="2b5fc39d7b02d82f2a0866709f1df8f7ca2cf377367c88bdf8a4438d06da3987" Feb 17 18:12:53 crc kubenswrapper[4680]: I0217 18:12:53.855277 4680 scope.go:117] "RemoveContainer" containerID="de7b12c3fcef1c67e78a398deafd767a418da626e6641de0fe2801c8a01f696e" Feb 17 18:12:53 crc kubenswrapper[4680]: I0217 18:12:53.874880 4680 scope.go:117] "RemoveContainer" containerID="484cf52dc7dcabfd9107b439da5050e2c2f67dc4e690aa8bb04062303583bf71" Feb 17 18:12:53 crc kubenswrapper[4680]: I0217 18:12:53.893600 4680 scope.go:117] "RemoveContainer" containerID="bf3202a551f7329f30ec3ef58caf4dfe7277da5c16d3eb54a3debf569b8b910f" Feb 17 18:13:02 crc 
kubenswrapper[4680]: I0217 18:13:02.105664 4680 scope.go:117] "RemoveContainer" containerID="3d8de076bce422cc1e91fb92ebdaf2197336fde15a42ede9bcc224d5d09c8ff1" Feb 17 18:13:02 crc kubenswrapper[4680]: E0217 18:13:02.106346 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:13:03 crc kubenswrapper[4680]: I0217 18:13:03.037297 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-dgb89"] Feb 17 18:13:03 crc kubenswrapper[4680]: I0217 18:13:03.046240 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-dgb89"] Feb 17 18:13:03 crc kubenswrapper[4680]: I0217 18:13:03.114611 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbcf2418-ecf2-4c69-9129-88ebe5cab8a2" path="/var/lib/kubelet/pods/bbcf2418-ecf2-4c69-9129-88ebe5cab8a2/volumes" Feb 17 18:13:05 crc kubenswrapper[4680]: I0217 18:13:05.028519 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-76rb8"] Feb 17 18:13:05 crc kubenswrapper[4680]: I0217 18:13:05.037998 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-76rb8"] Feb 17 18:13:05 crc kubenswrapper[4680]: I0217 18:13:05.117004 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2c53952-8df2-4f30-ae29-5778822cae38" path="/var/lib/kubelet/pods/b2c53952-8df2-4f30-ae29-5778822cae38/volumes" Feb 17 18:13:12 crc kubenswrapper[4680]: I0217 18:13:12.038021 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-477hc"] Feb 17 18:13:12 crc kubenswrapper[4680]: I0217 18:13:12.046001 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-bn2nz"] Feb 17 18:13:12 crc kubenswrapper[4680]: I0217 18:13:12.053423 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-477hc"] Feb 17 18:13:12 crc kubenswrapper[4680]: I0217 18:13:12.060458 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-bn2nz"] Feb 17 18:13:13 crc kubenswrapper[4680]: I0217 18:13:13.120603 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a49d620c-c0c2-41cb-b458-cb67351334ad" path="/var/lib/kubelet/pods/a49d620c-c0c2-41cb-b458-cb67351334ad/volumes" Feb 17 18:13:13 crc kubenswrapper[4680]: I0217 18:13:13.123433 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dce4bc8a-e4f7-4716-abc6-cb70c238f4cc" path="/var/lib/kubelet/pods/dce4bc8a-e4f7-4716-abc6-cb70c238f4cc/volumes" Feb 17 18:13:16 crc kubenswrapper[4680]: I0217 18:13:16.105941 4680 scope.go:117] "RemoveContainer" containerID="3d8de076bce422cc1e91fb92ebdaf2197336fde15a42ede9bcc224d5d09c8ff1" Feb 17 18:13:16 crc kubenswrapper[4680]: E0217 18:13:16.106711 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" 
podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:13:24 crc kubenswrapper[4680]: I0217 18:13:24.031555 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-9458w"] Feb 17 18:13:24 crc kubenswrapper[4680]: I0217 18:13:24.038985 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-9458w"] Feb 17 18:13:25 crc kubenswrapper[4680]: I0217 18:13:25.117497 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f869168f-8063-46a6-b574-d8be569ac733" path="/var/lib/kubelet/pods/f869168f-8063-46a6-b574-d8be569ac733/volumes" Feb 17 18:13:29 crc kubenswrapper[4680]: I0217 18:13:29.105597 4680 scope.go:117] "RemoveContainer" containerID="3d8de076bce422cc1e91fb92ebdaf2197336fde15a42ede9bcc224d5d09c8ff1" Feb 17 18:13:29 crc kubenswrapper[4680]: E0217 18:13:29.106739 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:13:42 crc kubenswrapper[4680]: I0217 18:13:42.105010 4680 scope.go:117] "RemoveContainer" containerID="3d8de076bce422cc1e91fb92ebdaf2197336fde15a42ede9bcc224d5d09c8ff1" Feb 17 18:13:42 crc kubenswrapper[4680]: E0217 18:13:42.105768 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:13:42 crc kubenswrapper[4680]: I0217 18:13:42.818657 4680 generic.go:334] "Generic (PLEG): container finished" podID="fc6f1259-7540-444b-997b-4bfdddbc6977" containerID="23883269ad4a13f7146343963534ff8e8fe6ad89f5a7d1fd7c10e8a276526288" exitCode=0 Feb 17 18:13:42 crc kubenswrapper[4680]: I0217 18:13:42.818729 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sstxg" event={"ID":"fc6f1259-7540-444b-997b-4bfdddbc6977","Type":"ContainerDied","Data":"23883269ad4a13f7146343963534ff8e8fe6ad89f5a7d1fd7c10e8a276526288"} Feb 17 18:13:44 crc kubenswrapper[4680]: I0217 18:13:44.190225 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sstxg" Feb 17 18:13:44 crc kubenswrapper[4680]: I0217 18:13:44.331860 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc6f1259-7540-444b-997b-4bfdddbc6977-inventory\") pod \"fc6f1259-7540-444b-997b-4bfdddbc6977\" (UID: \"fc6f1259-7540-444b-997b-4bfdddbc6977\") " Feb 17 18:13:44 crc kubenswrapper[4680]: I0217 18:13:44.331925 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc6f1259-7540-444b-997b-4bfdddbc6977-ssh-key-openstack-edpm-ipam\") pod \"fc6f1259-7540-444b-997b-4bfdddbc6977\" (UID: \"fc6f1259-7540-444b-997b-4bfdddbc6977\") " Feb 17 18:13:44 crc kubenswrapper[4680]: I0217 18:13:44.331961 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppkvb\" (UniqueName: \"kubernetes.io/projected/fc6f1259-7540-444b-997b-4bfdddbc6977-kube-api-access-ppkvb\") pod \"fc6f1259-7540-444b-997b-4bfdddbc6977\" (UID: \"fc6f1259-7540-444b-997b-4bfdddbc6977\") " Feb 17 18:13:44 crc kubenswrapper[4680]: I0217 18:13:44.337630 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc6f1259-7540-444b-997b-4bfdddbc6977-kube-api-access-ppkvb" (OuterVolumeSpecName: "kube-api-access-ppkvb") pod "fc6f1259-7540-444b-997b-4bfdddbc6977" (UID: "fc6f1259-7540-444b-997b-4bfdddbc6977"). InnerVolumeSpecName "kube-api-access-ppkvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:13:44 crc kubenswrapper[4680]: I0217 18:13:44.362603 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc6f1259-7540-444b-997b-4bfdddbc6977-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fc6f1259-7540-444b-997b-4bfdddbc6977" (UID: "fc6f1259-7540-444b-997b-4bfdddbc6977"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:13:44 crc kubenswrapper[4680]: I0217 18:13:44.367412 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc6f1259-7540-444b-997b-4bfdddbc6977-inventory" (OuterVolumeSpecName: "inventory") pod "fc6f1259-7540-444b-997b-4bfdddbc6977" (UID: "fc6f1259-7540-444b-997b-4bfdddbc6977"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:13:44 crc kubenswrapper[4680]: I0217 18:13:44.434095 4680 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc6f1259-7540-444b-997b-4bfdddbc6977-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 18:13:44 crc kubenswrapper[4680]: I0217 18:13:44.434126 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc6f1259-7540-444b-997b-4bfdddbc6977-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 18:13:44 crc kubenswrapper[4680]: I0217 18:13:44.434138 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppkvb\" (UniqueName: \"kubernetes.io/projected/fc6f1259-7540-444b-997b-4bfdddbc6977-kube-api-access-ppkvb\") on node \"crc\" DevicePath \"\"" Feb 17 18:13:44 crc kubenswrapper[4680]: I0217 18:13:44.834592 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sstxg" event={"ID":"fc6f1259-7540-444b-997b-4bfdddbc6977","Type":"ContainerDied","Data":"67d041579fae347281d1fdcfa83a7e2906f1dd3d87f8c697116a7cbdc33d994d"} Feb 17 18:13:44 crc kubenswrapper[4680]: I0217 18:13:44.834904 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67d041579fae347281d1fdcfa83a7e2906f1dd3d87f8c697116a7cbdc33d994d" Feb 17 18:13:44 crc kubenswrapper[4680]: I0217 18:13:44.834655 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sstxg" Feb 17 18:13:44 crc kubenswrapper[4680]: I0217 18:13:44.943493 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmfpf"] Feb 17 18:13:44 crc kubenswrapper[4680]: E0217 18:13:44.943870 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc6f1259-7540-444b-997b-4bfdddbc6977" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 18:13:44 crc kubenswrapper[4680]: I0217 18:13:44.943887 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc6f1259-7540-444b-997b-4bfdddbc6977" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 18:13:44 crc kubenswrapper[4680]: I0217 18:13:44.944069 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc6f1259-7540-444b-997b-4bfdddbc6977" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 18:13:44 crc kubenswrapper[4680]: I0217 18:13:44.944747 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmfpf" Feb 17 18:13:44 crc kubenswrapper[4680]: I0217 18:13:44.949529 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 18:13:44 crc kubenswrapper[4680]: I0217 18:13:44.949593 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mhdwq" Feb 17 18:13:44 crc kubenswrapper[4680]: I0217 18:13:44.949610 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 18:13:44 crc kubenswrapper[4680]: I0217 18:13:44.949543 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 18:13:44 crc kubenswrapper[4680]: I0217 18:13:44.961997 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmfpf"] Feb 17 18:13:45 crc kubenswrapper[4680]: I0217 18:13:45.043731 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05d9615f-35c4-4fcc-ac20-b798ba6abc72-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pmfpf\" (UID: \"05d9615f-35c4-4fcc-ac20-b798ba6abc72\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmfpf" Feb 17 18:13:45 crc kubenswrapper[4680]: I0217 18:13:45.043787 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05d9615f-35c4-4fcc-ac20-b798ba6abc72-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pmfpf\" (UID: \"05d9615f-35c4-4fcc-ac20-b798ba6abc72\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmfpf" Feb 17 18:13:45 crc kubenswrapper[4680]: I0217 18:13:45.043901 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlc97\" (UniqueName: \"kubernetes.io/projected/05d9615f-35c4-4fcc-ac20-b798ba6abc72-kube-api-access-qlc97\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pmfpf\" (UID: \"05d9615f-35c4-4fcc-ac20-b798ba6abc72\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmfpf" Feb 17 18:13:45 crc kubenswrapper[4680]: I0217 18:13:45.146092 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05d9615f-35c4-4fcc-ac20-b798ba6abc72-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pmfpf\" (UID: \"05d9615f-35c4-4fcc-ac20-b798ba6abc72\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmfpf" Feb 17 18:13:45 crc kubenswrapper[4680]: I0217 18:13:45.146153 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05d9615f-35c4-4fcc-ac20-b798ba6abc72-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pmfpf\" (UID: \"05d9615f-35c4-4fcc-ac20-b798ba6abc72\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmfpf" Feb 17 18:13:45 crc kubenswrapper[4680]: I0217 18:13:45.146219 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlc97\" (UniqueName: 
\"kubernetes.io/projected/05d9615f-35c4-4fcc-ac20-b798ba6abc72-kube-api-access-qlc97\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pmfpf\" (UID: \"05d9615f-35c4-4fcc-ac20-b798ba6abc72\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmfpf" Feb 17 18:13:45 crc kubenswrapper[4680]: I0217 18:13:45.153233 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05d9615f-35c4-4fcc-ac20-b798ba6abc72-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pmfpf\" (UID: \"05d9615f-35c4-4fcc-ac20-b798ba6abc72\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmfpf" Feb 17 18:13:45 crc kubenswrapper[4680]: I0217 18:13:45.159122 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05d9615f-35c4-4fcc-ac20-b798ba6abc72-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pmfpf\" (UID: \"05d9615f-35c4-4fcc-ac20-b798ba6abc72\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmfpf" Feb 17 18:13:45 crc kubenswrapper[4680]: I0217 18:13:45.166230 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlc97\" (UniqueName: \"kubernetes.io/projected/05d9615f-35c4-4fcc-ac20-b798ba6abc72-kube-api-access-qlc97\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pmfpf\" (UID: \"05d9615f-35c4-4fcc-ac20-b798ba6abc72\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmfpf" Feb 17 18:13:45 crc kubenswrapper[4680]: I0217 18:13:45.265569 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmfpf" Feb 17 18:13:45 crc kubenswrapper[4680]: I0217 18:13:45.755228 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmfpf"] Feb 17 18:13:45 crc kubenswrapper[4680]: I0217 18:13:45.843821 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmfpf" event={"ID":"05d9615f-35c4-4fcc-ac20-b798ba6abc72","Type":"ContainerStarted","Data":"a210c46e5a17271d4c940aa9c504494c3ae0412bb4e51b278e5dabf22262ff78"} Feb 17 18:13:46 crc kubenswrapper[4680]: I0217 18:13:46.873231 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmfpf" event={"ID":"05d9615f-35c4-4fcc-ac20-b798ba6abc72","Type":"ContainerStarted","Data":"c60a31aa88fcfe411ff14ba14f8f4ac07c780925feb4d0013efc56233d3b35f5"} Feb 17 18:13:46 crc kubenswrapper[4680]: I0217 18:13:46.905406 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmfpf" podStartSLOduration=2.480387727 podStartE2EDuration="2.905383812s" podCreationTimestamp="2026-02-17 18:13:44 +0000 UTC" firstStartedPulling="2026-02-17 18:13:45.763056038 +0000 UTC m=+1755.313954717" lastFinishedPulling="2026-02-17 18:13:46.188052123 +0000 UTC m=+1755.738950802" observedRunningTime="2026-02-17 18:13:46.891691542 +0000 UTC m=+1756.442590231" watchObservedRunningTime="2026-02-17 18:13:46.905383812 +0000 UTC m=+1756.456282501" Feb 17 18:13:54 crc kubenswrapper[4680]: I0217 18:13:54.069218 4680 scope.go:117] "RemoveContainer" 
containerID="402a0262b690fade24798da881d746d742311d70295f6b4d9909081f63083658" Feb 17 18:13:54 crc kubenswrapper[4680]: I0217 18:13:54.100138 4680 scope.go:117] "RemoveContainer" containerID="63c18c1503d5044c8df349e56802f1c81e24b1744c0c3f36b09a08879a1536f8" Feb 17 18:13:54 crc kubenswrapper[4680]: I0217 18:13:54.138923 4680 scope.go:117] "RemoveContainer" containerID="6f25aa11e0c3556a1b98a0ccdf89d777fde4b0e9166e86c6f664f7ceb38d94ae" Feb 17 18:13:54 crc kubenswrapper[4680]: I0217 18:13:54.177704 4680 scope.go:117] "RemoveContainer" containerID="91facdc44387a65dcf29e667a062812504fbfdafa8c48e06e491ef94d63b5540" Feb 17 18:13:54 crc kubenswrapper[4680]: I0217 18:13:54.216728 4680 scope.go:117] "RemoveContainer" containerID="78b364685af07b8c110b1575898f77e9639129c3c8d218360fa60b41101601eb" Feb 17 18:13:54 crc kubenswrapper[4680]: I0217 18:13:54.243573 4680 scope.go:117] "RemoveContainer" containerID="e4b9ce719834371df46b902b4450b26c5d26df2365c01637f2d138fbbfe0ac3d" Feb 17 18:13:54 crc kubenswrapper[4680]: I0217 18:13:54.277422 4680 scope.go:117] "RemoveContainer" containerID="f115b6f2712360fedf2655bc1a7af5bb3d800f96bce75a80dcb9d60c9dc3edf5" Feb 17 18:13:55 crc kubenswrapper[4680]: I0217 18:13:55.105696 4680 scope.go:117] "RemoveContainer" containerID="3d8de076bce422cc1e91fb92ebdaf2197336fde15a42ede9bcc224d5d09c8ff1" Feb 17 18:13:55 crc kubenswrapper[4680]: E0217 18:13:55.106299 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:14:07 crc kubenswrapper[4680]: I0217 18:14:07.105349 4680 scope.go:117] "RemoveContainer" containerID="3d8de076bce422cc1e91fb92ebdaf2197336fde15a42ede9bcc224d5d09c8ff1" Feb 17 18:14:07 crc kubenswrapper[4680]: E0217 18:14:07.106135 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:14:14 crc kubenswrapper[4680]: I0217 18:14:14.040633 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-7ghqn"] Feb 17 18:14:14 crc kubenswrapper[4680]: I0217 18:14:14.049422 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-7ghqn"] Feb 17 18:14:15 crc kubenswrapper[4680]: I0217 18:14:15.037608 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-d88b-account-create-update-t8n2f"] Feb 17 18:14:15 crc kubenswrapper[4680]: I0217 18:14:15.055173 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-8c2a-account-create-update-xfl7v"] Feb 17 18:14:15 crc kubenswrapper[4680]: I0217 18:14:15.067127 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-8c2a-account-create-update-xfl7v"] Feb 17 18:14:15 crc kubenswrapper[4680]: I0217 18:14:15.087944 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-d88b-account-create-update-t8n2f"] Feb 17 18:14:15 crc 
kubenswrapper[4680]: I0217 18:14:15.097399 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2ae5-account-create-update-m7zrl"] Feb 17 18:14:15 crc kubenswrapper[4680]: I0217 18:14:15.116963 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fd9bf62-1912-4923-a80f-5f4c5ba9d592" path="/var/lib/kubelet/pods/6fd9bf62-1912-4923-a80f-5f4c5ba9d592/volumes" Feb 17 18:14:15 crc kubenswrapper[4680]: I0217 18:14:15.117852 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c975a37-6c0c-4126-9764-7e434c09cde1" path="/var/lib/kubelet/pods/8c975a37-6c0c-4126-9764-7e434c09cde1/volumes" Feb 17 18:14:15 crc kubenswrapper[4680]: I0217 18:14:15.118593 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c035dac9-ced2-4d11-98fb-55b14332a2ac" path="/var/lib/kubelet/pods/c035dac9-ced2-4d11-98fb-55b14332a2ac/volumes" Feb 17 18:14:15 crc kubenswrapper[4680]: I0217 18:14:15.119196 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-2ae5-account-create-update-m7zrl"] Feb 17 18:14:15 crc kubenswrapper[4680]: I0217 18:14:15.119288 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-hdz75"] Feb 17 18:14:15 crc kubenswrapper[4680]: I0217 18:14:15.124177 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-hdz75"] Feb 17 18:14:15 crc kubenswrapper[4680]: I0217 18:14:15.133156 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-rkvht"] Feb 17 18:14:15 crc kubenswrapper[4680]: I0217 18:14:15.139663 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-rkvht"] Feb 17 18:14:17 crc kubenswrapper[4680]: I0217 18:14:17.116954 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a32553e-b714-453e-9686-a5d72cd3e3fd" path="/var/lib/kubelet/pods/1a32553e-b714-453e-9686-a5d72cd3e3fd/volumes" Feb 17 18:14:17 crc kubenswrapper[4680]: I0217 18:14:17.118140 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47c0ebcb-0014-4cc0-b660-536f4392d4d6" path="/var/lib/kubelet/pods/47c0ebcb-0014-4cc0-b660-536f4392d4d6/volumes" Feb 17 18:14:17 crc kubenswrapper[4680]: I0217 18:14:17.118946 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9534f780-22b4-4288-b8d5-3d80336a0e55" path="/var/lib/kubelet/pods/9534f780-22b4-4288-b8d5-3d80336a0e55/volumes" Feb 17 18:14:21 crc kubenswrapper[4680]: I0217 18:14:21.112375 4680 scope.go:117] "RemoveContainer" containerID="3d8de076bce422cc1e91fb92ebdaf2197336fde15a42ede9bcc224d5d09c8ff1" Feb 17 18:14:21 crc kubenswrapper[4680]: E0217 18:14:21.113143 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:14:34 crc kubenswrapper[4680]: I0217 18:14:34.104874 4680 scope.go:117] "RemoveContainer" containerID="3d8de076bce422cc1e91fb92ebdaf2197336fde15a42ede9bcc224d5d09c8ff1" Feb 17 18:14:34 crc kubenswrapper[4680]: E0217 18:14:34.105738 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:14:47 crc kubenswrapper[4680]: I0217 18:14:47.105833 4680 scope.go:117] "RemoveContainer" containerID="3d8de076bce422cc1e91fb92ebdaf2197336fde15a42ede9bcc224d5d09c8ff1" Feb 17 18:14:47 crc kubenswrapper[4680]: I0217 18:14:47.361838 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerStarted","Data":"cd71dcda81999845ebec67c7f0fbf1d96faca4f692411c4e3318e7b0e0daed8f"} Feb 17 18:14:54 crc kubenswrapper[4680]: I0217 18:14:54.431451 4680 scope.go:117] "RemoveContainer" containerID="983a25ec9a3dbdfe8e5f42b33b3ffcb104560d7399ebd08b638d0acadfd471e5" Feb 17 18:14:54 crc kubenswrapper[4680]: I0217 18:14:54.473225 4680 scope.go:117] "RemoveContainer" containerID="c3aacbbfcb27ed9154b954cf371ceba9ed735c1b3573e9716751f20193f5bcd8" Feb 17 18:14:54 crc kubenswrapper[4680]: I0217 18:14:54.512660 4680 scope.go:117] "RemoveContainer" containerID="b3cafea384aaff83bdde53a1768a7773e54b25f1d033188476f9e0a75ce5ef65" Feb 17 18:14:54 crc kubenswrapper[4680]: I0217 18:14:54.564999 4680 scope.go:117] "RemoveContainer" containerID="3d980b326af1db60c2d541bf447cf19cd73ebe345d411697060d326546adb18c" Feb 17 18:14:54 crc kubenswrapper[4680]: I0217 18:14:54.622870 4680 scope.go:117] "RemoveContainer" containerID="accd89a62d0663067098553fd55a8f66d68d5c6f0d94a73c5732fe462541e9a2" Feb 17 18:14:54 crc kubenswrapper[4680]: I0217 18:14:54.663772 4680 scope.go:117] "RemoveContainer" containerID="944c720d40e5ed54055d7d1e89f0a31214dba259a34b6ac5b0490d36fad5415b" Feb 17 18:14:57 crc kubenswrapper[4680]: I0217 18:14:57.051016 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9kszg"] Feb 17 18:14:57 crc kubenswrapper[4680]: I0217 18:14:57.059396 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9kszg"] Feb 17 18:14:57 crc kubenswrapper[4680]: I0217 18:14:57.117691 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cf000f9-0745-4550-84cc-2d2d1d8520fd" path="/var/lib/kubelet/pods/0cf000f9-0745-4550-84cc-2d2d1d8520fd/volumes" Feb 17 18:14:57 crc kubenswrapper[4680]: I0217 18:14:57.445711 4680 generic.go:334] "Generic (PLEG): container finished" podID="05d9615f-35c4-4fcc-ac20-b798ba6abc72" containerID="c60a31aa88fcfe411ff14ba14f8f4ac07c780925feb4d0013efc56233d3b35f5" exitCode=0 Feb 17 18:14:57 crc kubenswrapper[4680]: I0217 18:14:57.445852 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmfpf" event={"ID":"05d9615f-35c4-4fcc-ac20-b798ba6abc72","Type":"ContainerDied","Data":"c60a31aa88fcfe411ff14ba14f8f4ac07c780925feb4d0013efc56233d3b35f5"} Feb 17 18:14:58 crc kubenswrapper[4680]: I0217 18:14:58.899064 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmfpf" Feb 17 18:14:59 crc kubenswrapper[4680]: I0217 18:14:59.069274 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlc97\" (UniqueName: \"kubernetes.io/projected/05d9615f-35c4-4fcc-ac20-b798ba6abc72-kube-api-access-qlc97\") pod \"05d9615f-35c4-4fcc-ac20-b798ba6abc72\" (UID: \"05d9615f-35c4-4fcc-ac20-b798ba6abc72\") " Feb 17 18:14:59 crc kubenswrapper[4680]: I0217 18:14:59.070085 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05d9615f-35c4-4fcc-ac20-b798ba6abc72-inventory\") pod \"05d9615f-35c4-4fcc-ac20-b798ba6abc72\" (UID: \"05d9615f-35c4-4fcc-ac20-b798ba6abc72\") " Feb 17 18:14:59 crc kubenswrapper[4680]: I0217 18:14:59.070270 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05d9615f-35c4-4fcc-ac20-b798ba6abc72-ssh-key-openstack-edpm-ipam\") pod \"05d9615f-35c4-4fcc-ac20-b798ba6abc72\" (UID: \"05d9615f-35c4-4fcc-ac20-b798ba6abc72\") " Feb 17 18:14:59 crc kubenswrapper[4680]: I0217 18:14:59.077309 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05d9615f-35c4-4fcc-ac20-b798ba6abc72-kube-api-access-qlc97" (OuterVolumeSpecName: "kube-api-access-qlc97") pod "05d9615f-35c4-4fcc-ac20-b798ba6abc72" (UID: "05d9615f-35c4-4fcc-ac20-b798ba6abc72"). InnerVolumeSpecName "kube-api-access-qlc97". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:14:59 crc kubenswrapper[4680]: I0217 18:14:59.102720 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05d9615f-35c4-4fcc-ac20-b798ba6abc72-inventory" (OuterVolumeSpecName: "inventory") pod "05d9615f-35c4-4fcc-ac20-b798ba6abc72" (UID: "05d9615f-35c4-4fcc-ac20-b798ba6abc72"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:14:59 crc kubenswrapper[4680]: I0217 18:14:59.109120 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05d9615f-35c4-4fcc-ac20-b798ba6abc72-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "05d9615f-35c4-4fcc-ac20-b798ba6abc72" (UID: "05d9615f-35c4-4fcc-ac20-b798ba6abc72"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:14:59 crc kubenswrapper[4680]: I0217 18:14:59.172507 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05d9615f-35c4-4fcc-ac20-b798ba6abc72-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 18:14:59 crc kubenswrapper[4680]: I0217 18:14:59.172535 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlc97\" (UniqueName: \"kubernetes.io/projected/05d9615f-35c4-4fcc-ac20-b798ba6abc72-kube-api-access-qlc97\") on node \"crc\" DevicePath \"\"" Feb 17 18:14:59 crc kubenswrapper[4680]: I0217 18:14:59.172545 4680 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05d9615f-35c4-4fcc-ac20-b798ba6abc72-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 18:14:59 crc kubenswrapper[4680]: I0217 18:14:59.470385 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmfpf" event={"ID":"05d9615f-35c4-4fcc-ac20-b798ba6abc72","Type":"ContainerDied","Data":"a210c46e5a17271d4c940aa9c504494c3ae0412bb4e51b278e5dabf22262ff78"} Feb 17 18:14:59 crc kubenswrapper[4680]: I0217 18:14:59.470987 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a210c46e5a17271d4c940aa9c504494c3ae0412bb4e51b278e5dabf22262ff78" Feb 17 18:14:59 crc kubenswrapper[4680]: I0217 18:14:59.470524 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pmfpf" Feb 17 18:14:59 crc kubenswrapper[4680]: I0217 18:14:59.660930 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4g8lm"] Feb 17 18:14:59 crc kubenswrapper[4680]: E0217 18:14:59.661351 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05d9615f-35c4-4fcc-ac20-b798ba6abc72" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 17 18:14:59 crc kubenswrapper[4680]: I0217 18:14:59.661373 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="05d9615f-35c4-4fcc-ac20-b798ba6abc72" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 17 18:14:59 crc kubenswrapper[4680]: I0217 18:14:59.661601 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="05d9615f-35c4-4fcc-ac20-b798ba6abc72" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 17 18:14:59 crc kubenswrapper[4680]: I0217 18:14:59.662285 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4g8lm" Feb 17 18:14:59 crc kubenswrapper[4680]: I0217 18:14:59.715862 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 18:14:59 crc kubenswrapper[4680]: I0217 18:14:59.715876 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 18:14:59 crc kubenswrapper[4680]: I0217 18:14:59.716017 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mhdwq" Feb 17 18:14:59 crc kubenswrapper[4680]: I0217 18:14:59.716059 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 18:14:59 crc kubenswrapper[4680]: I0217 18:14:59.721702 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4g8lm"] Feb 17 18:14:59 crc kubenswrapper[4680]: I0217 18:14:59.815114 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8db14b6-26ac-4b47-a820-ff6f4ce152dd-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4g8lm\" (UID: \"f8db14b6-26ac-4b47-a820-ff6f4ce152dd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4g8lm" Feb 17 18:14:59 crc kubenswrapper[4680]: I0217 18:14:59.815176 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgm24\" (UniqueName: \"kubernetes.io/projected/f8db14b6-26ac-4b47-a820-ff6f4ce152dd-kube-api-access-cgm24\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4g8lm\" (UID: \"f8db14b6-26ac-4b47-a820-ff6f4ce152dd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4g8lm" Feb 17 18:14:59 crc kubenswrapper[4680]: I0217 18:14:59.815404 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8db14b6-26ac-4b47-a820-ff6f4ce152dd-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4g8lm\" (UID: \"f8db14b6-26ac-4b47-a820-ff6f4ce152dd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4g8lm" Feb 17 18:14:59 crc kubenswrapper[4680]: I0217 18:14:59.916988 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8db14b6-26ac-4b47-a820-ff6f4ce152dd-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4g8lm\" (UID: \"f8db14b6-26ac-4b47-a820-ff6f4ce152dd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4g8lm" Feb 17 18:14:59 crc kubenswrapper[4680]: I0217 18:14:59.917054 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgm24\" (UniqueName: \"kubernetes.io/projected/f8db14b6-26ac-4b47-a820-ff6f4ce152dd-kube-api-access-cgm24\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4g8lm\" (UID: \"f8db14b6-26ac-4b47-a820-ff6f4ce152dd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4g8lm" Feb 17 18:14:59 crc kubenswrapper[4680]: I0217 18:14:59.917144 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/f8db14b6-26ac-4b47-a820-ff6f4ce152dd-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4g8lm\" (UID: \"f8db14b6-26ac-4b47-a820-ff6f4ce152dd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4g8lm" Feb 17 18:14:59 crc kubenswrapper[4680]: I0217 18:14:59.921850 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8db14b6-26ac-4b47-a820-ff6f4ce152dd-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4g8lm\" (UID: \"f8db14b6-26ac-4b47-a820-ff6f4ce152dd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4g8lm" Feb 17 18:14:59 crc kubenswrapper[4680]: I0217 18:14:59.926946 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8db14b6-26ac-4b47-a820-ff6f4ce152dd-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4g8lm\" (UID: \"f8db14b6-26ac-4b47-a820-ff6f4ce152dd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4g8lm" Feb 17 18:14:59 crc kubenswrapper[4680]: I0217 18:14:59.933649 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgm24\" (UniqueName: \"kubernetes.io/projected/f8db14b6-26ac-4b47-a820-ff6f4ce152dd-kube-api-access-cgm24\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4g8lm\" (UID: \"f8db14b6-26ac-4b47-a820-ff6f4ce152dd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4g8lm" Feb 17 18:15:00 crc kubenswrapper[4680]: I0217 18:15:00.032006 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4g8lm" Feb 17 18:15:00 crc kubenswrapper[4680]: I0217 18:15:00.144362 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522535-s5qwc"] Feb 17 18:15:00 crc kubenswrapper[4680]: I0217 18:15:00.145951 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522535-s5qwc" Feb 17 18:15:00 crc kubenswrapper[4680]: I0217 18:15:00.148357 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 18:15:00 crc kubenswrapper[4680]: I0217 18:15:00.148649 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 18:15:00 crc kubenswrapper[4680]: I0217 18:15:00.179821 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522535-s5qwc"] Feb 17 18:15:00 crc kubenswrapper[4680]: I0217 18:15:00.323926 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd5f3080-2ae7-4012-8230-fc007a1a1b8f-config-volume\") pod \"collect-profiles-29522535-s5qwc\" (UID: \"dd5f3080-2ae7-4012-8230-fc007a1a1b8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522535-s5qwc" Feb 17 18:15:00 crc kubenswrapper[4680]: I0217 18:15:00.324075 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dd5f3080-2ae7-4012-8230-fc007a1a1b8f-secret-volume\") pod \"collect-profiles-29522535-s5qwc\" (UID: \"dd5f3080-2ae7-4012-8230-fc007a1a1b8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522535-s5qwc" Feb 17 18:15:00 crc kubenswrapper[4680]: I0217 18:15:00.324115 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmhfj\" (UniqueName: \"kubernetes.io/projected/dd5f3080-2ae7-4012-8230-fc007a1a1b8f-kube-api-access-hmhfj\") pod \"collect-profiles-29522535-s5qwc\" (UID: \"dd5f3080-2ae7-4012-8230-fc007a1a1b8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522535-s5qwc" Feb 17 18:15:00 crc kubenswrapper[4680]: I0217 18:15:00.425534 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd5f3080-2ae7-4012-8230-fc007a1a1b8f-config-volume\") pod \"collect-profiles-29522535-s5qwc\" (UID: \"dd5f3080-2ae7-4012-8230-fc007a1a1b8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522535-s5qwc" Feb 17 18:15:00 crc kubenswrapper[4680]: I0217 18:15:00.425686 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dd5f3080-2ae7-4012-8230-fc007a1a1b8f-secret-volume\") pod \"collect-profiles-29522535-s5qwc\" (UID: \"dd5f3080-2ae7-4012-8230-fc007a1a1b8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522535-s5qwc" Feb 17 18:15:00 crc kubenswrapper[4680]: I0217 18:15:00.425734 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmhfj\" (UniqueName: \"kubernetes.io/projected/dd5f3080-2ae7-4012-8230-fc007a1a1b8f-kube-api-access-hmhfj\") pod \"collect-profiles-29522535-s5qwc\" (UID: \"dd5f3080-2ae7-4012-8230-fc007a1a1b8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522535-s5qwc" Feb 17 18:15:00 crc kubenswrapper[4680]: I0217 18:15:00.426511 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd5f3080-2ae7-4012-8230-fc007a1a1b8f-config-volume\") pod 
\"collect-profiles-29522535-s5qwc\" (UID: \"dd5f3080-2ae7-4012-8230-fc007a1a1b8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522535-s5qwc" Feb 17 18:15:00 crc kubenswrapper[4680]: I0217 18:15:00.439087 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dd5f3080-2ae7-4012-8230-fc007a1a1b8f-secret-volume\") pod \"collect-profiles-29522535-s5qwc\" (UID: \"dd5f3080-2ae7-4012-8230-fc007a1a1b8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522535-s5qwc" Feb 17 18:15:00 crc kubenswrapper[4680]: I0217 18:15:00.444280 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmhfj\" (UniqueName: \"kubernetes.io/projected/dd5f3080-2ae7-4012-8230-fc007a1a1b8f-kube-api-access-hmhfj\") pod \"collect-profiles-29522535-s5qwc\" (UID: \"dd5f3080-2ae7-4012-8230-fc007a1a1b8f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522535-s5qwc" Feb 17 18:15:00 crc kubenswrapper[4680]: I0217 18:15:00.493930 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522535-s5qwc" Feb 17 18:15:00 crc kubenswrapper[4680]: I0217 18:15:00.636688 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4g8lm"] Feb 17 18:15:00 crc kubenswrapper[4680]: I0217 18:15:00.966587 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522535-s5qwc"] Feb 17 18:15:01 crc kubenswrapper[4680]: I0217 18:15:01.491631 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4g8lm" event={"ID":"f8db14b6-26ac-4b47-a820-ff6f4ce152dd","Type":"ContainerStarted","Data":"990754fbee91a159e0c81ff2f453f53e6bde45c00db33dd6cd946dceee5bac1c"} Feb 17 18:15:01 crc kubenswrapper[4680]: I0217 18:15:01.491913 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4g8lm" event={"ID":"f8db14b6-26ac-4b47-a820-ff6f4ce152dd","Type":"ContainerStarted","Data":"a483ed2c58d75ebe5e5815408d75de57a2b51b0c7856dea0770e4b7e0f4f0731"} Feb 17 18:15:01 crc kubenswrapper[4680]: I0217 18:15:01.495595 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522535-s5qwc" event={"ID":"dd5f3080-2ae7-4012-8230-fc007a1a1b8f","Type":"ContainerStarted","Data":"c47a0d8b2b835691675b0d91faf02cafbd3af9433ad65964ec55f21bdbad9997"} Feb 17 18:15:01 crc kubenswrapper[4680]: I0217 18:15:01.495642 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522535-s5qwc" event={"ID":"dd5f3080-2ae7-4012-8230-fc007a1a1b8f","Type":"ContainerStarted","Data":"6869c7c53b58e106158b09f7574b05d6cc9d15c091836a3f7dfbb8bcab9962ef"} Feb 17 18:15:01 crc kubenswrapper[4680]: I0217 18:15:01.516798 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4g8lm" podStartSLOduration=1.995161196 podStartE2EDuration="2.516769543s" podCreationTimestamp="2026-02-17 18:14:59 +0000 UTC" firstStartedPulling="2026-02-17 18:15:00.645464193 +0000 UTC m=+1830.196362872" lastFinishedPulling="2026-02-17 18:15:01.16707254 +0000 UTC m=+1830.717971219" observedRunningTime="2026-02-17 18:15:01.505927952 +0000 UTC m=+1831.056826631" 
watchObservedRunningTime="2026-02-17 18:15:01.516769543 +0000 UTC m=+1831.067668222" Feb 17 18:15:01 crc kubenswrapper[4680]: I0217 18:15:01.523509 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522535-s5qwc" podStartSLOduration=1.5234918830000002 podStartE2EDuration="1.523491883s" podCreationTimestamp="2026-02-17 18:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:15:01.520640164 +0000 UTC m=+1831.071538843" watchObservedRunningTime="2026-02-17 18:15:01.523491883 +0000 UTC m=+1831.074390562" Feb 17 18:15:02 crc kubenswrapper[4680]: I0217 18:15:02.506444 4680 generic.go:334] "Generic (PLEG): container finished" podID="dd5f3080-2ae7-4012-8230-fc007a1a1b8f" containerID="c47a0d8b2b835691675b0d91faf02cafbd3af9433ad65964ec55f21bdbad9997" exitCode=0 Feb 17 18:15:02 crc kubenswrapper[4680]: I0217 18:15:02.507774 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522535-s5qwc" event={"ID":"dd5f3080-2ae7-4012-8230-fc007a1a1b8f","Type":"ContainerDied","Data":"c47a0d8b2b835691675b0d91faf02cafbd3af9433ad65964ec55f21bdbad9997"} Feb 17 18:15:03 crc kubenswrapper[4680]: I0217 18:15:03.824983 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522535-s5qwc" Feb 17 18:15:03 crc kubenswrapper[4680]: I0217 18:15:03.889243 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmhfj\" (UniqueName: \"kubernetes.io/projected/dd5f3080-2ae7-4012-8230-fc007a1a1b8f-kube-api-access-hmhfj\") pod \"dd5f3080-2ae7-4012-8230-fc007a1a1b8f\" (UID: \"dd5f3080-2ae7-4012-8230-fc007a1a1b8f\") " Feb 17 18:15:03 crc kubenswrapper[4680]: I0217 18:15:03.889404 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd5f3080-2ae7-4012-8230-fc007a1a1b8f-config-volume\") pod \"dd5f3080-2ae7-4012-8230-fc007a1a1b8f\" (UID: \"dd5f3080-2ae7-4012-8230-fc007a1a1b8f\") " Feb 17 18:15:03 crc kubenswrapper[4680]: I0217 18:15:03.889513 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dd5f3080-2ae7-4012-8230-fc007a1a1b8f-secret-volume\") pod \"dd5f3080-2ae7-4012-8230-fc007a1a1b8f\" (UID: \"dd5f3080-2ae7-4012-8230-fc007a1a1b8f\") " Feb 17 18:15:03 crc kubenswrapper[4680]: I0217 18:15:03.892833 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd5f3080-2ae7-4012-8230-fc007a1a1b8f-config-volume" (OuterVolumeSpecName: "config-volume") pod "dd5f3080-2ae7-4012-8230-fc007a1a1b8f" (UID: "dd5f3080-2ae7-4012-8230-fc007a1a1b8f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:15:03 crc kubenswrapper[4680]: I0217 18:15:03.895619 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd5f3080-2ae7-4012-8230-fc007a1a1b8f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dd5f3080-2ae7-4012-8230-fc007a1a1b8f" (UID: "dd5f3080-2ae7-4012-8230-fc007a1a1b8f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:15:03 crc kubenswrapper[4680]: I0217 18:15:03.896888 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd5f3080-2ae7-4012-8230-fc007a1a1b8f-kube-api-access-hmhfj" (OuterVolumeSpecName: "kube-api-access-hmhfj") pod "dd5f3080-2ae7-4012-8230-fc007a1a1b8f" (UID: "dd5f3080-2ae7-4012-8230-fc007a1a1b8f"). InnerVolumeSpecName "kube-api-access-hmhfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:15:03 crc kubenswrapper[4680]: I0217 18:15:03.991359 4680 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dd5f3080-2ae7-4012-8230-fc007a1a1b8f-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 18:15:03 crc kubenswrapper[4680]: I0217 18:15:03.991399 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmhfj\" (UniqueName: \"kubernetes.io/projected/dd5f3080-2ae7-4012-8230-fc007a1a1b8f-kube-api-access-hmhfj\") on node \"crc\" DevicePath \"\"" Feb 17 18:15:03 crc kubenswrapper[4680]: I0217 18:15:03.991415 4680 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd5f3080-2ae7-4012-8230-fc007a1a1b8f-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 18:15:04 crc kubenswrapper[4680]: I0217 18:15:04.525246 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522535-s5qwc" event={"ID":"dd5f3080-2ae7-4012-8230-fc007a1a1b8f","Type":"ContainerDied","Data":"6869c7c53b58e106158b09f7574b05d6cc9d15c091836a3f7dfbb8bcab9962ef"} Feb 17 18:15:04 crc kubenswrapper[4680]: I0217 18:15:04.525286 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6869c7c53b58e106158b09f7574b05d6cc9d15c091836a3f7dfbb8bcab9962ef" Feb 17 18:15:04 crc kubenswrapper[4680]: I0217 18:15:04.525305 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522535-s5qwc" Feb 17 18:15:06 crc kubenswrapper[4680]: I0217 18:15:06.549613 4680 generic.go:334] "Generic (PLEG): container finished" podID="f8db14b6-26ac-4b47-a820-ff6f4ce152dd" containerID="990754fbee91a159e0c81ff2f453f53e6bde45c00db33dd6cd946dceee5bac1c" exitCode=0 Feb 17 18:15:06 crc kubenswrapper[4680]: I0217 18:15:06.550563 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4g8lm" event={"ID":"f8db14b6-26ac-4b47-a820-ff6f4ce152dd","Type":"ContainerDied","Data":"990754fbee91a159e0c81ff2f453f53e6bde45c00db33dd6cd946dceee5bac1c"} Feb 17 18:15:07 crc kubenswrapper[4680]: I0217 18:15:07.928117 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4g8lm" Feb 17 18:15:07 crc kubenswrapper[4680]: I0217 18:15:07.995705 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgm24\" (UniqueName: \"kubernetes.io/projected/f8db14b6-26ac-4b47-a820-ff6f4ce152dd-kube-api-access-cgm24\") pod \"f8db14b6-26ac-4b47-a820-ff6f4ce152dd\" (UID: \"f8db14b6-26ac-4b47-a820-ff6f4ce152dd\") " Feb 17 18:15:07 crc kubenswrapper[4680]: I0217 18:15:07.995774 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8db14b6-26ac-4b47-a820-ff6f4ce152dd-inventory\") pod \"f8db14b6-26ac-4b47-a820-ff6f4ce152dd\" (UID: \"f8db14b6-26ac-4b47-a820-ff6f4ce152dd\") " Feb 17 18:15:07 crc kubenswrapper[4680]: I0217 18:15:07.996013 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8db14b6-26ac-4b47-a820-ff6f4ce152dd-ssh-key-openstack-edpm-ipam\") pod \"f8db14b6-26ac-4b47-a820-ff6f4ce152dd\" (UID: \"f8db14b6-26ac-4b47-a820-ff6f4ce152dd\") " Feb 17 18:15:08 crc kubenswrapper[4680]: I0217 18:15:08.001884 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8db14b6-26ac-4b47-a820-ff6f4ce152dd-kube-api-access-cgm24" (OuterVolumeSpecName: "kube-api-access-cgm24") pod "f8db14b6-26ac-4b47-a820-ff6f4ce152dd" (UID: "f8db14b6-26ac-4b47-a820-ff6f4ce152dd"). InnerVolumeSpecName "kube-api-access-cgm24". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:15:08 crc kubenswrapper[4680]: I0217 18:15:08.023729 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8db14b6-26ac-4b47-a820-ff6f4ce152dd-inventory" (OuterVolumeSpecName: "inventory") pod "f8db14b6-26ac-4b47-a820-ff6f4ce152dd" (UID: "f8db14b6-26ac-4b47-a820-ff6f4ce152dd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:15:08 crc kubenswrapper[4680]: I0217 18:15:08.024411 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8db14b6-26ac-4b47-a820-ff6f4ce152dd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f8db14b6-26ac-4b47-a820-ff6f4ce152dd" (UID: "f8db14b6-26ac-4b47-a820-ff6f4ce152dd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:15:08 crc kubenswrapper[4680]: I0217 18:15:08.098691 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8db14b6-26ac-4b47-a820-ff6f4ce152dd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 18:15:08 crc kubenswrapper[4680]: I0217 18:15:08.098729 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgm24\" (UniqueName: \"kubernetes.io/projected/f8db14b6-26ac-4b47-a820-ff6f4ce152dd-kube-api-access-cgm24\") on node \"crc\" DevicePath \"\"" Feb 17 18:15:08 crc kubenswrapper[4680]: I0217 18:15:08.098742 4680 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8db14b6-26ac-4b47-a820-ff6f4ce152dd-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 18:15:08 crc kubenswrapper[4680]: I0217 18:15:08.568037 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4g8lm" event={"ID":"f8db14b6-26ac-4b47-a820-ff6f4ce152dd","Type":"ContainerDied","Data":"a483ed2c58d75ebe5e5815408d75de57a2b51b0c7856dea0770e4b7e0f4f0731"} Feb 17 18:15:08 crc kubenswrapper[4680]: I0217 18:15:08.568128 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4g8lm" Feb 17 18:15:08 crc kubenswrapper[4680]: I0217 18:15:08.568478 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a483ed2c58d75ebe5e5815408d75de57a2b51b0c7856dea0770e4b7e0f4f0731" Feb 17 18:15:08 crc kubenswrapper[4680]: I0217 18:15:08.638473 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-g99q6"] Feb 17 18:15:08 crc kubenswrapper[4680]: E0217 18:15:08.639259 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8db14b6-26ac-4b47-a820-ff6f4ce152dd" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 17 18:15:08 crc kubenswrapper[4680]: I0217 18:15:08.639285 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8db14b6-26ac-4b47-a820-ff6f4ce152dd" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 17 18:15:08 crc kubenswrapper[4680]: E0217 18:15:08.639303 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd5f3080-2ae7-4012-8230-fc007a1a1b8f" containerName="collect-profiles" Feb 17 18:15:08 crc kubenswrapper[4680]: I0217 18:15:08.639361 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd5f3080-2ae7-4012-8230-fc007a1a1b8f" containerName="collect-profiles" Feb 17 18:15:08 crc kubenswrapper[4680]: I0217 18:15:08.639598 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8db14b6-26ac-4b47-a820-ff6f4ce152dd" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 17 18:15:08 crc kubenswrapper[4680]: I0217 18:15:08.639613 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd5f3080-2ae7-4012-8230-fc007a1a1b8f" containerName="collect-profiles" Feb 17 18:15:08 crc kubenswrapper[4680]: I0217 18:15:08.640385 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g99q6" Feb 17 18:15:08 crc kubenswrapper[4680]: I0217 18:15:08.643587 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 18:15:08 crc kubenswrapper[4680]: I0217 18:15:08.645202 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 18:15:08 crc kubenswrapper[4680]: I0217 18:15:08.645428 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 18:15:08 crc kubenswrapper[4680]: I0217 18:15:08.645576 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mhdwq" Feb 17 18:15:08 crc kubenswrapper[4680]: I0217 18:15:08.655984 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-g99q6"] Feb 17 18:15:08 crc kubenswrapper[4680]: I0217 18:15:08.708482 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/88ff3876-8728-4981-a3c7-0ccef82dee3e-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g99q6\" (UID: \"88ff3876-8728-4981-a3c7-0ccef82dee3e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g99q6" Feb 17 18:15:08 crc kubenswrapper[4680]: I0217 18:15:08.708548 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8qgb\" (UniqueName: \"kubernetes.io/projected/88ff3876-8728-4981-a3c7-0ccef82dee3e-kube-api-access-w8qgb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g99q6\" (UID: \"88ff3876-8728-4981-a3c7-0ccef82dee3e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g99q6" Feb 17 18:15:08 crc kubenswrapper[4680]: I0217 18:15:08.708590 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88ff3876-8728-4981-a3c7-0ccef82dee3e-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g99q6\" (UID: \"88ff3876-8728-4981-a3c7-0ccef82dee3e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g99q6" Feb 17 18:15:08 crc kubenswrapper[4680]: I0217 18:15:08.810510 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/88ff3876-8728-4981-a3c7-0ccef82dee3e-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g99q6\" (UID: \"88ff3876-8728-4981-a3c7-0ccef82dee3e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g99q6" Feb 17 18:15:08 crc kubenswrapper[4680]: I0217 18:15:08.810745 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8qgb\" (UniqueName: \"kubernetes.io/projected/88ff3876-8728-4981-a3c7-0ccef82dee3e-kube-api-access-w8qgb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g99q6\" (UID: \"88ff3876-8728-4981-a3c7-0ccef82dee3e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g99q6" Feb 17 18:15:08 crc kubenswrapper[4680]: I0217 18:15:08.810883 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88ff3876-8728-4981-a3c7-0ccef82dee3e-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-g99q6\" (UID: \"88ff3876-8728-4981-a3c7-0ccef82dee3e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g99q6" Feb 17 18:15:08 crc kubenswrapper[4680]: I0217 18:15:08.818598 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/88ff3876-8728-4981-a3c7-0ccef82dee3e-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g99q6\" (UID: \"88ff3876-8728-4981-a3c7-0ccef82dee3e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g99q6" Feb 17 18:15:08 crc kubenswrapper[4680]: I0217 18:15:08.818642 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88ff3876-8728-4981-a3c7-0ccef82dee3e-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g99q6\" (UID: \"88ff3876-8728-4981-a3c7-0ccef82dee3e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g99q6" Feb 17 18:15:08 crc kubenswrapper[4680]: I0217 18:15:08.826414 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8qgb\" (UniqueName: \"kubernetes.io/projected/88ff3876-8728-4981-a3c7-0ccef82dee3e-kube-api-access-w8qgb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g99q6\" (UID: \"88ff3876-8728-4981-a3c7-0ccef82dee3e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g99q6" Feb 17 18:15:08 crc kubenswrapper[4680]: I0217 18:15:08.990097 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g99q6" Feb 17 18:15:09 crc kubenswrapper[4680]: I0217 18:15:09.542027 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-g99q6"] Feb 17 18:15:09 crc kubenswrapper[4680]: I0217 18:15:09.577903 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g99q6" event={"ID":"88ff3876-8728-4981-a3c7-0ccef82dee3e","Type":"ContainerStarted","Data":"7550e7ec6998aafd91636e156f18dd4e8ec3f3bf54fc1b39c59d01ece1293a4a"} Feb 17 18:15:10 crc kubenswrapper[4680]: I0217 18:15:10.587494 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g99q6" event={"ID":"88ff3876-8728-4981-a3c7-0ccef82dee3e","Type":"ContainerStarted","Data":"dd877029d6dfcc785f8ed4b969e77fcb8702996041d1009ea88f42d8981d0e4f"} Feb 17 18:15:10 crc kubenswrapper[4680]: I0217 18:15:10.614380 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g99q6" podStartSLOduration=2.14954514 podStartE2EDuration="2.614350605s" podCreationTimestamp="2026-02-17 18:15:08 +0000 UTC" firstStartedPulling="2026-02-17 18:15:09.557290387 +0000 UTC m=+1839.108189066" lastFinishedPulling="2026-02-17 18:15:10.022095842 +0000 UTC m=+1839.572994531" observedRunningTime="2026-02-17 18:15:10.604259019 +0000 UTC m=+1840.155157728" watchObservedRunningTime="2026-02-17 18:15:10.614350605 +0000 UTC m=+1840.165249284" Feb 17 18:15:29 crc kubenswrapper[4680]: I0217 18:15:29.044186 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-h6gpf"] Feb 17 18:15:29 crc kubenswrapper[4680]: I0217 18:15:29.053211 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-h6gpf"] Feb 
17 18:15:29 crc kubenswrapper[4680]: I0217 18:15:29.116817 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96e9fe6c-a137-4f3a-8ff4-3f1678a234cc" path="/var/lib/kubelet/pods/96e9fe6c-a137-4f3a-8ff4-3f1678a234cc/volumes" Feb 17 18:15:30 crc kubenswrapper[4680]: I0217 18:15:30.027241 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-wpttw"] Feb 17 18:15:30 crc kubenswrapper[4680]: I0217 18:15:30.033874 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-wpttw"] Feb 17 18:15:31 crc kubenswrapper[4680]: I0217 18:15:31.125471 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4eea2a6c-918d-47a3-b782-9a0f035b6bdc" path="/var/lib/kubelet/pods/4eea2a6c-918d-47a3-b782-9a0f035b6bdc/volumes" Feb 17 18:15:47 crc kubenswrapper[4680]: I0217 18:15:47.871834 4680 generic.go:334] "Generic (PLEG): container finished" podID="88ff3876-8728-4981-a3c7-0ccef82dee3e" containerID="dd877029d6dfcc785f8ed4b969e77fcb8702996041d1009ea88f42d8981d0e4f" exitCode=0 Feb 17 18:15:47 crc kubenswrapper[4680]: I0217 18:15:47.872352 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g99q6" event={"ID":"88ff3876-8728-4981-a3c7-0ccef82dee3e","Type":"ContainerDied","Data":"dd877029d6dfcc785f8ed4b969e77fcb8702996041d1009ea88f42d8981d0e4f"} Feb 17 18:15:49 crc kubenswrapper[4680]: I0217 18:15:49.315902 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g99q6" Feb 17 18:15:49 crc kubenswrapper[4680]: I0217 18:15:49.412706 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/88ff3876-8728-4981-a3c7-0ccef82dee3e-ssh-key-openstack-edpm-ipam\") pod \"88ff3876-8728-4981-a3c7-0ccef82dee3e\" (UID: \"88ff3876-8728-4981-a3c7-0ccef82dee3e\") " Feb 17 18:15:49 crc kubenswrapper[4680]: I0217 18:15:49.412777 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88ff3876-8728-4981-a3c7-0ccef82dee3e-inventory\") pod \"88ff3876-8728-4981-a3c7-0ccef82dee3e\" (UID: \"88ff3876-8728-4981-a3c7-0ccef82dee3e\") " Feb 17 18:15:49 crc kubenswrapper[4680]: I0217 18:15:49.412931 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8qgb\" (UniqueName: \"kubernetes.io/projected/88ff3876-8728-4981-a3c7-0ccef82dee3e-kube-api-access-w8qgb\") pod \"88ff3876-8728-4981-a3c7-0ccef82dee3e\" (UID: \"88ff3876-8728-4981-a3c7-0ccef82dee3e\") " Feb 17 18:15:49 crc kubenswrapper[4680]: I0217 18:15:49.425686 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88ff3876-8728-4981-a3c7-0ccef82dee3e-kube-api-access-w8qgb" (OuterVolumeSpecName: "kube-api-access-w8qgb") pod "88ff3876-8728-4981-a3c7-0ccef82dee3e" (UID: "88ff3876-8728-4981-a3c7-0ccef82dee3e"). InnerVolumeSpecName "kube-api-access-w8qgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:15:49 crc kubenswrapper[4680]: I0217 18:15:49.449485 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88ff3876-8728-4981-a3c7-0ccef82dee3e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "88ff3876-8728-4981-a3c7-0ccef82dee3e" (UID: "88ff3876-8728-4981-a3c7-0ccef82dee3e"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:15:49 crc kubenswrapper[4680]: I0217 18:15:49.463436 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88ff3876-8728-4981-a3c7-0ccef82dee3e-inventory" (OuterVolumeSpecName: "inventory") pod "88ff3876-8728-4981-a3c7-0ccef82dee3e" (UID: "88ff3876-8728-4981-a3c7-0ccef82dee3e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:15:49 crc kubenswrapper[4680]: I0217 18:15:49.515879 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/88ff3876-8728-4981-a3c7-0ccef82dee3e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 18:15:49 crc kubenswrapper[4680]: I0217 18:15:49.515923 4680 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88ff3876-8728-4981-a3c7-0ccef82dee3e-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 18:15:49 crc kubenswrapper[4680]: I0217 18:15:49.515933 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8qgb\" (UniqueName: \"kubernetes.io/projected/88ff3876-8728-4981-a3c7-0ccef82dee3e-kube-api-access-w8qgb\") on node \"crc\" DevicePath \"\"" Feb 17 18:15:49 crc kubenswrapper[4680]: I0217 18:15:49.891691 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g99q6" event={"ID":"88ff3876-8728-4981-a3c7-0ccef82dee3e","Type":"ContainerDied","Data":"7550e7ec6998aafd91636e156f18dd4e8ec3f3bf54fc1b39c59d01ece1293a4a"} Feb 17 18:15:49 crc kubenswrapper[4680]: I0217 18:15:49.891733 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7550e7ec6998aafd91636e156f18dd4e8ec3f3bf54fc1b39c59d01ece1293a4a" Feb 17 18:15:49 crc kubenswrapper[4680]: I0217 18:15:49.891779 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g99q6" Feb 17 18:15:50 crc kubenswrapper[4680]: I0217 18:15:50.004880 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vr9b7"] Feb 17 18:15:50 crc kubenswrapper[4680]: E0217 18:15:50.005704 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88ff3876-8728-4981-a3c7-0ccef82dee3e" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 17 18:15:50 crc kubenswrapper[4680]: I0217 18:15:50.005729 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="88ff3876-8728-4981-a3c7-0ccef82dee3e" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 17 18:15:50 crc kubenswrapper[4680]: I0217 18:15:50.006074 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="88ff3876-8728-4981-a3c7-0ccef82dee3e" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 17 18:15:50 crc kubenswrapper[4680]: I0217 18:15:50.006881 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vr9b7" Feb 17 18:15:50 crc kubenswrapper[4680]: I0217 18:15:50.008990 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 18:15:50 crc kubenswrapper[4680]: I0217 18:15:50.008999 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 18:15:50 crc kubenswrapper[4680]: I0217 18:15:50.009527 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 18:15:50 crc kubenswrapper[4680]: I0217 18:15:50.016690 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vr9b7"] Feb 17 18:15:50 crc kubenswrapper[4680]: I0217 18:15:50.020748 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mhdwq" Feb 17 18:15:50 crc kubenswrapper[4680]: I0217 18:15:50.127288 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44aba16b-dafd-4ce2-8f13-01e7bb14a109-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vr9b7\" (UID: \"44aba16b-dafd-4ce2-8f13-01e7bb14a109\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vr9b7" Feb 17 18:15:50 crc kubenswrapper[4680]: I0217 18:15:50.127481 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/44aba16b-dafd-4ce2-8f13-01e7bb14a109-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vr9b7\" (UID: \"44aba16b-dafd-4ce2-8f13-01e7bb14a109\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vr9b7" Feb 17 18:15:50 crc kubenswrapper[4680]: I0217 18:15:50.127516 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mgmv\" (UniqueName: \"kubernetes.io/projected/44aba16b-dafd-4ce2-8f13-01e7bb14a109-kube-api-access-8mgmv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vr9b7\" (UID: \"44aba16b-dafd-4ce2-8f13-01e7bb14a109\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vr9b7" Feb 17 18:15:50 crc kubenswrapper[4680]: I0217 18:15:50.229712 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/44aba16b-dafd-4ce2-8f13-01e7bb14a109-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vr9b7\" (UID: \"44aba16b-dafd-4ce2-8f13-01e7bb14a109\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vr9b7" Feb 17 18:15:50 crc kubenswrapper[4680]: I0217 18:15:50.229772 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mgmv\" (UniqueName: \"kubernetes.io/projected/44aba16b-dafd-4ce2-8f13-01e7bb14a109-kube-api-access-8mgmv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vr9b7\" (UID: \"44aba16b-dafd-4ce2-8f13-01e7bb14a109\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vr9b7" Feb 17 18:15:50 crc kubenswrapper[4680]: I0217 18:15:50.229998 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/44aba16b-dafd-4ce2-8f13-01e7bb14a109-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vr9b7\" (UID: \"44aba16b-dafd-4ce2-8f13-01e7bb14a109\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vr9b7" Feb 17 18:15:50 crc kubenswrapper[4680]: I0217 18:15:50.241182 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44aba16b-dafd-4ce2-8f13-01e7bb14a109-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vr9b7\" (UID: \"44aba16b-dafd-4ce2-8f13-01e7bb14a109\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vr9b7" Feb 17 18:15:50 crc kubenswrapper[4680]: I0217 18:15:50.257954 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/44aba16b-dafd-4ce2-8f13-01e7bb14a109-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vr9b7\" (UID: \"44aba16b-dafd-4ce2-8f13-01e7bb14a109\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vr9b7" Feb 17 18:15:50 crc kubenswrapper[4680]: I0217 18:15:50.283050 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mgmv\" (UniqueName: \"kubernetes.io/projected/44aba16b-dafd-4ce2-8f13-01e7bb14a109-kube-api-access-8mgmv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vr9b7\" (UID: \"44aba16b-dafd-4ce2-8f13-01e7bb14a109\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vr9b7" Feb 17 18:15:50 crc kubenswrapper[4680]: I0217 18:15:50.370509 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vr9b7" Feb 17 18:15:50 crc kubenswrapper[4680]: I0217 18:15:50.950739 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vr9b7"] Feb 17 18:15:51 crc kubenswrapper[4680]: I0217 18:15:51.908568 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vr9b7" event={"ID":"44aba16b-dafd-4ce2-8f13-01e7bb14a109","Type":"ContainerStarted","Data":"77def1ba76670c791bcacfa45cf7cc933c2ed1975be34c705c7b8967ffa851df"} Feb 17 18:15:52 crc kubenswrapper[4680]: I0217 18:15:52.916667 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vr9b7" event={"ID":"44aba16b-dafd-4ce2-8f13-01e7bb14a109","Type":"ContainerStarted","Data":"b76e060537d42af892fd87ecf2d871d2385fb759dc90f2cd553db88d714d49e6"} Feb 17 18:15:52 crc kubenswrapper[4680]: I0217 18:15:52.936829 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vr9b7" podStartSLOduration=3.135474985 podStartE2EDuration="3.93680839s" podCreationTimestamp="2026-02-17 18:15:49 +0000 UTC" firstStartedPulling="2026-02-17 18:15:50.934649236 +0000 UTC m=+1880.485547915" lastFinishedPulling="2026-02-17 18:15:51.735982641 +0000 UTC m=+1881.286881320" observedRunningTime="2026-02-17 18:15:52.932751712 +0000 UTC m=+1882.483650411" watchObservedRunningTime="2026-02-17 18:15:52.93680839 +0000 UTC m=+1882.487707069" Feb 17 18:15:54 crc kubenswrapper[4680]: I0217 18:15:54.841764 4680 scope.go:117] "RemoveContainer" containerID="546253378bee7adf3be3a64f591786189d53d7fd0d28df6c1ab890407ae80c8a" Feb 17 18:15:54 crc kubenswrapper[4680]: I0217 18:15:54.878395 
4680 scope.go:117] "RemoveContainer" containerID="f31167c43edc01f5787eed5a377c18b9b57e257877217cc954bed19168a5f5c0" Feb 17 18:15:54 crc kubenswrapper[4680]: I0217 18:15:54.926036 4680 scope.go:117] "RemoveContainer" containerID="6486b7cb7c1814dd6bb8fa0855f1fe117cdff4dfc777924da50e61487487e818" Feb 17 18:16:09 crc kubenswrapper[4680]: I0217 18:16:09.044539 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-5pmz5"] Feb 17 18:16:09 crc kubenswrapper[4680]: I0217 18:16:09.058102 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-5pmz5"] Feb 17 18:16:09 crc kubenswrapper[4680]: I0217 18:16:09.115685 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aeb67070-334f-40f4-9fb4-789f761e50fc" path="/var/lib/kubelet/pods/aeb67070-334f-40f4-9fb4-789f761e50fc/volumes" Feb 17 18:16:40 crc kubenswrapper[4680]: I0217 18:16:40.310931 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-7866795846-vgwfn" podUID="6f60616b-3009-4961-af69-11a3d3e42f4b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 18:16:41 crc kubenswrapper[4680]: I0217 18:16:41.395172 4680 generic.go:334] "Generic (PLEG): container finished" podID="44aba16b-dafd-4ce2-8f13-01e7bb14a109" containerID="b76e060537d42af892fd87ecf2d871d2385fb759dc90f2cd553db88d714d49e6" exitCode=0 Feb 17 18:16:41 crc kubenswrapper[4680]: I0217 18:16:41.395259 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vr9b7" event={"ID":"44aba16b-dafd-4ce2-8f13-01e7bb14a109","Type":"ContainerDied","Data":"b76e060537d42af892fd87ecf2d871d2385fb759dc90f2cd553db88d714d49e6"} Feb 17 18:16:42 crc kubenswrapper[4680]: I0217 18:16:42.825458 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vr9b7" Feb 17 18:16:42 crc kubenswrapper[4680]: I0217 18:16:42.861059 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/44aba16b-dafd-4ce2-8f13-01e7bb14a109-ssh-key-openstack-edpm-ipam\") pod \"44aba16b-dafd-4ce2-8f13-01e7bb14a109\" (UID: \"44aba16b-dafd-4ce2-8f13-01e7bb14a109\") " Feb 17 18:16:42 crc kubenswrapper[4680]: I0217 18:16:42.861183 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mgmv\" (UniqueName: \"kubernetes.io/projected/44aba16b-dafd-4ce2-8f13-01e7bb14a109-kube-api-access-8mgmv\") pod \"44aba16b-dafd-4ce2-8f13-01e7bb14a109\" (UID: \"44aba16b-dafd-4ce2-8f13-01e7bb14a109\") " Feb 17 18:16:42 crc kubenswrapper[4680]: I0217 18:16:42.861356 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44aba16b-dafd-4ce2-8f13-01e7bb14a109-inventory\") pod \"44aba16b-dafd-4ce2-8f13-01e7bb14a109\" (UID: \"44aba16b-dafd-4ce2-8f13-01e7bb14a109\") " Feb 17 18:16:42 crc kubenswrapper[4680]: I0217 18:16:42.869050 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44aba16b-dafd-4ce2-8f13-01e7bb14a109-kube-api-access-8mgmv" (OuterVolumeSpecName: "kube-api-access-8mgmv") pod "44aba16b-dafd-4ce2-8f13-01e7bb14a109" (UID: "44aba16b-dafd-4ce2-8f13-01e7bb14a109"). 
InnerVolumeSpecName "kube-api-access-8mgmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:16:42 crc kubenswrapper[4680]: I0217 18:16:42.892369 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44aba16b-dafd-4ce2-8f13-01e7bb14a109-inventory" (OuterVolumeSpecName: "inventory") pod "44aba16b-dafd-4ce2-8f13-01e7bb14a109" (UID: "44aba16b-dafd-4ce2-8f13-01e7bb14a109"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:16:42 crc kubenswrapper[4680]: I0217 18:16:42.892468 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44aba16b-dafd-4ce2-8f13-01e7bb14a109-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "44aba16b-dafd-4ce2-8f13-01e7bb14a109" (UID: "44aba16b-dafd-4ce2-8f13-01e7bb14a109"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:16:42 crc kubenswrapper[4680]: I0217 18:16:42.963374 4680 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44aba16b-dafd-4ce2-8f13-01e7bb14a109-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 18:16:42 crc kubenswrapper[4680]: I0217 18:16:42.963412 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/44aba16b-dafd-4ce2-8f13-01e7bb14a109-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 18:16:42 crc kubenswrapper[4680]: I0217 18:16:42.963424 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mgmv\" (UniqueName: \"kubernetes.io/projected/44aba16b-dafd-4ce2-8f13-01e7bb14a109-kube-api-access-8mgmv\") on node \"crc\" DevicePath \"\"" Feb 17 18:16:43 crc kubenswrapper[4680]: I0217 18:16:43.414039 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vr9b7" event={"ID":"44aba16b-dafd-4ce2-8f13-01e7bb14a109","Type":"ContainerDied","Data":"77def1ba76670c791bcacfa45cf7cc933c2ed1975be34c705c7b8967ffa851df"} Feb 17 18:16:43 crc kubenswrapper[4680]: I0217 18:16:43.414410 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77def1ba76670c791bcacfa45cf7cc933c2ed1975be34c705c7b8967ffa851df" Feb 17 18:16:43 crc kubenswrapper[4680]: I0217 18:16:43.414085 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vr9b7" Feb 17 18:16:43 crc kubenswrapper[4680]: I0217 18:16:43.562696 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-5gb6z"] Feb 17 18:16:43 crc kubenswrapper[4680]: E0217 18:16:43.563210 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44aba16b-dafd-4ce2-8f13-01e7bb14a109" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 17 18:16:43 crc kubenswrapper[4680]: I0217 18:16:43.563233 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="44aba16b-dafd-4ce2-8f13-01e7bb14a109" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 17 18:16:43 crc kubenswrapper[4680]: I0217 18:16:43.563468 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="44aba16b-dafd-4ce2-8f13-01e7bb14a109" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 17 18:16:43 crc kubenswrapper[4680]: I0217 18:16:43.564187 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5gb6z" Feb 17 18:16:43 crc kubenswrapper[4680]: I0217 18:16:43.565937 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 18:16:43 crc kubenswrapper[4680]: I0217 18:16:43.566006 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 18:16:43 crc kubenswrapper[4680]: I0217 18:16:43.566048 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 18:16:43 crc kubenswrapper[4680]: I0217 18:16:43.566493 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mhdwq" Feb 17 18:16:43 crc kubenswrapper[4680]: I0217 18:16:43.582140 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-5gb6z"] Feb 17 18:16:43 crc kubenswrapper[4680]: I0217 18:16:43.675513 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad93d3c0-6115-499e-b0db-6840bda12a02-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-5gb6z\" (UID: \"ad93d3c0-6115-499e-b0db-6840bda12a02\") " pod="openstack/ssh-known-hosts-edpm-deployment-5gb6z" Feb 17 18:16:43 crc kubenswrapper[4680]: I0217 18:16:43.675629 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv8dp\" (UniqueName: \"kubernetes.io/projected/ad93d3c0-6115-499e-b0db-6840bda12a02-kube-api-access-dv8dp\") pod \"ssh-known-hosts-edpm-deployment-5gb6z\" (UID: \"ad93d3c0-6115-499e-b0db-6840bda12a02\") " pod="openstack/ssh-known-hosts-edpm-deployment-5gb6z" Feb 17 18:16:43 crc kubenswrapper[4680]: I0217 18:16:43.675671 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ad93d3c0-6115-499e-b0db-6840bda12a02-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-5gb6z\" (UID: \"ad93d3c0-6115-499e-b0db-6840bda12a02\") " pod="openstack/ssh-known-hosts-edpm-deployment-5gb6z" Feb 17 18:16:43 crc kubenswrapper[4680]: I0217 18:16:43.777988 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv8dp\" (UniqueName: 
\"kubernetes.io/projected/ad93d3c0-6115-499e-b0db-6840bda12a02-kube-api-access-dv8dp\") pod \"ssh-known-hosts-edpm-deployment-5gb6z\" (UID: \"ad93d3c0-6115-499e-b0db-6840bda12a02\") " pod="openstack/ssh-known-hosts-edpm-deployment-5gb6z" Feb 17 18:16:43 crc kubenswrapper[4680]: I0217 18:16:43.778082 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ad93d3c0-6115-499e-b0db-6840bda12a02-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-5gb6z\" (UID: \"ad93d3c0-6115-499e-b0db-6840bda12a02\") " pod="openstack/ssh-known-hosts-edpm-deployment-5gb6z" Feb 17 18:16:43 crc kubenswrapper[4680]: I0217 18:16:43.778244 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad93d3c0-6115-499e-b0db-6840bda12a02-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-5gb6z\" (UID: \"ad93d3c0-6115-499e-b0db-6840bda12a02\") " pod="openstack/ssh-known-hosts-edpm-deployment-5gb6z" Feb 17 18:16:43 crc kubenswrapper[4680]: I0217 18:16:43.786214 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ad93d3c0-6115-499e-b0db-6840bda12a02-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-5gb6z\" (UID: \"ad93d3c0-6115-499e-b0db-6840bda12a02\") " pod="openstack/ssh-known-hosts-edpm-deployment-5gb6z" Feb 17 18:16:43 crc kubenswrapper[4680]: I0217 18:16:43.786481 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad93d3c0-6115-499e-b0db-6840bda12a02-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-5gb6z\" (UID: \"ad93d3c0-6115-499e-b0db-6840bda12a02\") " pod="openstack/ssh-known-hosts-edpm-deployment-5gb6z" Feb 17 18:16:43 crc kubenswrapper[4680]: I0217 18:16:43.799695 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv8dp\" (UniqueName: \"kubernetes.io/projected/ad93d3c0-6115-499e-b0db-6840bda12a02-kube-api-access-dv8dp\") pod \"ssh-known-hosts-edpm-deployment-5gb6z\" (UID: \"ad93d3c0-6115-499e-b0db-6840bda12a02\") " pod="openstack/ssh-known-hosts-edpm-deployment-5gb6z" Feb 17 18:16:43 crc kubenswrapper[4680]: I0217 18:16:43.882595 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5gb6z" Feb 17 18:16:44 crc kubenswrapper[4680]: I0217 18:16:44.427416 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-5gb6z"] Feb 17 18:16:44 crc kubenswrapper[4680]: I0217 18:16:44.435126 4680 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 18:16:45 crc kubenswrapper[4680]: I0217 18:16:45.433517 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5gb6z" event={"ID":"ad93d3c0-6115-499e-b0db-6840bda12a02","Type":"ContainerStarted","Data":"1c22118a246ddb49f6287e33ab7731105b6dcac878cb92a271bb8c32308d109a"} Feb 17 18:16:45 crc kubenswrapper[4680]: I0217 18:16:45.434057 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5gb6z" event={"ID":"ad93d3c0-6115-499e-b0db-6840bda12a02","Type":"ContainerStarted","Data":"5fb93f775ecbb5c613cb0f313693d5eac8a82496ca518e220ce9ee08041206ef"} Feb 17 18:16:45 crc kubenswrapper[4680]: I0217 18:16:45.454726 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-5gb6z" podStartSLOduration=2.038832013 podStartE2EDuration="2.454706752s" podCreationTimestamp="2026-02-17 18:16:43 +0000 UTC" firstStartedPulling="2026-02-17 18:16:44.434845782 +0000 UTC m=+1933.985744461" lastFinishedPulling="2026-02-17 18:16:44.850720521 +0000 UTC m=+1934.401619200" observedRunningTime="2026-02-17 18:16:45.447173956 +0000 UTC m=+1934.998072635" watchObservedRunningTime="2026-02-17 18:16:45.454706752 +0000 UTC m=+1935.005605421" Feb 17 18:16:52 crc kubenswrapper[4680]: I0217 18:16:52.492444 4680 generic.go:334] "Generic (PLEG): container finished" podID="ad93d3c0-6115-499e-b0db-6840bda12a02" containerID="1c22118a246ddb49f6287e33ab7731105b6dcac878cb92a271bb8c32308d109a" exitCode=0 Feb 17 18:16:52 crc kubenswrapper[4680]: I0217 18:16:52.492500 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5gb6z" event={"ID":"ad93d3c0-6115-499e-b0db-6840bda12a02","Type":"ContainerDied","Data":"1c22118a246ddb49f6287e33ab7731105b6dcac878cb92a271bb8c32308d109a"} Feb 17 18:16:53 crc kubenswrapper[4680]: I0217 18:16:53.930983 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5gb6z" Feb 17 18:16:53 crc kubenswrapper[4680]: I0217 18:16:53.989950 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad93d3c0-6115-499e-b0db-6840bda12a02-ssh-key-openstack-edpm-ipam\") pod \"ad93d3c0-6115-499e-b0db-6840bda12a02\" (UID: \"ad93d3c0-6115-499e-b0db-6840bda12a02\") " Feb 17 18:16:53 crc kubenswrapper[4680]: I0217 18:16:53.990043 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ad93d3c0-6115-499e-b0db-6840bda12a02-inventory-0\") pod \"ad93d3c0-6115-499e-b0db-6840bda12a02\" (UID: \"ad93d3c0-6115-499e-b0db-6840bda12a02\") " Feb 17 18:16:53 crc kubenswrapper[4680]: I0217 18:16:53.990113 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dv8dp\" (UniqueName: \"kubernetes.io/projected/ad93d3c0-6115-499e-b0db-6840bda12a02-kube-api-access-dv8dp\") pod \"ad93d3c0-6115-499e-b0db-6840bda12a02\" (UID: \"ad93d3c0-6115-499e-b0db-6840bda12a02\") " Feb 17 18:16:53 crc kubenswrapper[4680]: I0217 18:16:53.995985 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad93d3c0-6115-499e-b0db-6840bda12a02-kube-api-access-dv8dp" (OuterVolumeSpecName: "kube-api-access-dv8dp") pod "ad93d3c0-6115-499e-b0db-6840bda12a02" (UID: "ad93d3c0-6115-499e-b0db-6840bda12a02"). InnerVolumeSpecName "kube-api-access-dv8dp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:16:54 crc kubenswrapper[4680]: I0217 18:16:54.021814 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad93d3c0-6115-499e-b0db-6840bda12a02-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ad93d3c0-6115-499e-b0db-6840bda12a02" (UID: "ad93d3c0-6115-499e-b0db-6840bda12a02"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:16:54 crc kubenswrapper[4680]: I0217 18:16:54.022991 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad93d3c0-6115-499e-b0db-6840bda12a02-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "ad93d3c0-6115-499e-b0db-6840bda12a02" (UID: "ad93d3c0-6115-499e-b0db-6840bda12a02"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:16:54 crc kubenswrapper[4680]: I0217 18:16:54.092872 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad93d3c0-6115-499e-b0db-6840bda12a02-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 18:16:54 crc kubenswrapper[4680]: I0217 18:16:54.092903 4680 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ad93d3c0-6115-499e-b0db-6840bda12a02-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 17 18:16:54 crc kubenswrapper[4680]: I0217 18:16:54.092914 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dv8dp\" (UniqueName: \"kubernetes.io/projected/ad93d3c0-6115-499e-b0db-6840bda12a02-kube-api-access-dv8dp\") on node \"crc\" DevicePath \"\"" Feb 17 18:16:54 crc kubenswrapper[4680]: I0217 18:16:54.509246 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5gb6z" event={"ID":"ad93d3c0-6115-499e-b0db-6840bda12a02","Type":"ContainerDied","Data":"5fb93f775ecbb5c613cb0f313693d5eac8a82496ca518e220ce9ee08041206ef"} Feb 17 18:16:54 crc kubenswrapper[4680]: I0217 18:16:54.509300 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fb93f775ecbb5c613cb0f313693d5eac8a82496ca518e220ce9ee08041206ef" Feb 17 18:16:54 crc kubenswrapper[4680]: I0217 18:16:54.509295 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5gb6z" Feb 17 18:16:54 crc kubenswrapper[4680]: I0217 18:16:54.613216 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-h76n9"] Feb 17 18:16:54 crc kubenswrapper[4680]: E0217 18:16:54.613735 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad93d3c0-6115-499e-b0db-6840bda12a02" containerName="ssh-known-hosts-edpm-deployment" Feb 17 18:16:54 crc kubenswrapper[4680]: I0217 18:16:54.613762 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad93d3c0-6115-499e-b0db-6840bda12a02" containerName="ssh-known-hosts-edpm-deployment" Feb 17 18:16:54 crc kubenswrapper[4680]: I0217 18:16:54.614007 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad93d3c0-6115-499e-b0db-6840bda12a02" containerName="ssh-known-hosts-edpm-deployment" Feb 17 18:16:54 crc kubenswrapper[4680]: I0217 18:16:54.615505 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h76n9" Feb 17 18:16:54 crc kubenswrapper[4680]: I0217 18:16:54.619972 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 18:16:54 crc kubenswrapper[4680]: I0217 18:16:54.620278 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 18:16:54 crc kubenswrapper[4680]: I0217 18:16:54.620529 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mhdwq" Feb 17 18:16:54 crc kubenswrapper[4680]: I0217 18:16:54.621851 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 18:16:54 crc kubenswrapper[4680]: I0217 18:16:54.632506 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-h76n9"] Feb 17 18:16:54 crc kubenswrapper[4680]: I0217 18:16:54.803407 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gzck\" (UniqueName: \"kubernetes.io/projected/5731e4b0-b432-4c63-aa65-59eed4740731-kube-api-access-5gzck\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-h76n9\" (UID: \"5731e4b0-b432-4c63-aa65-59eed4740731\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h76n9" Feb 17 18:16:54 crc kubenswrapper[4680]: I0217 18:16:54.803503 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5731e4b0-b432-4c63-aa65-59eed4740731-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-h76n9\" (UID: \"5731e4b0-b432-4c63-aa65-59eed4740731\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h76n9" Feb 17 18:16:54 crc kubenswrapper[4680]: I0217 18:16:54.803668 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5731e4b0-b432-4c63-aa65-59eed4740731-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-h76n9\" (UID: \"5731e4b0-b432-4c63-aa65-59eed4740731\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h76n9" Feb 17 18:16:54 crc kubenswrapper[4680]: I0217 18:16:54.905019 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5731e4b0-b432-4c63-aa65-59eed4740731-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-h76n9\" (UID: \"5731e4b0-b432-4c63-aa65-59eed4740731\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h76n9" Feb 17 18:16:54 crc kubenswrapper[4680]: I0217 18:16:54.905129 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gzck\" (UniqueName: \"kubernetes.io/projected/5731e4b0-b432-4c63-aa65-59eed4740731-kube-api-access-5gzck\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-h76n9\" (UID: \"5731e4b0-b432-4c63-aa65-59eed4740731\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h76n9" Feb 17 18:16:54 crc kubenswrapper[4680]: I0217 18:16:54.905181 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5731e4b0-b432-4c63-aa65-59eed4740731-inventory\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-h76n9\" (UID: \"5731e4b0-b432-4c63-aa65-59eed4740731\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h76n9" Feb 17 18:16:54 crc kubenswrapper[4680]: I0217 18:16:54.911933 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5731e4b0-b432-4c63-aa65-59eed4740731-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-h76n9\" (UID: \"5731e4b0-b432-4c63-aa65-59eed4740731\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h76n9" Feb 17 18:16:54 crc kubenswrapper[4680]: I0217 18:16:54.911979 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5731e4b0-b432-4c63-aa65-59eed4740731-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-h76n9\" (UID: \"5731e4b0-b432-4c63-aa65-59eed4740731\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h76n9" Feb 17 18:16:54 crc kubenswrapper[4680]: I0217 18:16:54.927980 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gzck\" (UniqueName: \"kubernetes.io/projected/5731e4b0-b432-4c63-aa65-59eed4740731-kube-api-access-5gzck\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-h76n9\" (UID: \"5731e4b0-b432-4c63-aa65-59eed4740731\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h76n9" Feb 17 18:16:54 crc kubenswrapper[4680]: I0217 18:16:54.943101 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h76n9" Feb 17 18:16:55 crc kubenswrapper[4680]: I0217 18:16:55.047245 4680 scope.go:117] "RemoveContainer" containerID="a322e20a7b28592e732e0fb47b521397de5badc470bdbcdd12c691efb8633f2c" Feb 17 18:16:55 crc kubenswrapper[4680]: I0217 18:16:55.443227 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-h76n9"] Feb 17 18:16:55 crc kubenswrapper[4680]: I0217 18:16:55.518480 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h76n9" event={"ID":"5731e4b0-b432-4c63-aa65-59eed4740731","Type":"ContainerStarted","Data":"37b8a8ed1b2ad4e10b84ef10e83355f94727f8166d9605ee8f48d2341b07fa47"} Feb 17 18:16:56 crc kubenswrapper[4680]: I0217 18:16:56.527298 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h76n9" event={"ID":"5731e4b0-b432-4c63-aa65-59eed4740731","Type":"ContainerStarted","Data":"9269470787cb655ffa99fa98d0b8904e9680e77ea1102894510ebea29cfc068d"} Feb 17 18:16:56 crc kubenswrapper[4680]: I0217 18:16:56.550183 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h76n9" podStartSLOduration=2.044769343 podStartE2EDuration="2.550161052s" podCreationTimestamp="2026-02-17 18:16:54 +0000 UTC" firstStartedPulling="2026-02-17 18:16:55.446715458 +0000 UTC m=+1944.997614137" lastFinishedPulling="2026-02-17 18:16:55.952107167 +0000 UTC m=+1945.503005846" observedRunningTime="2026-02-17 18:16:56.541863192 +0000 UTC m=+1946.092761891" watchObservedRunningTime="2026-02-17 18:16:56.550161052 +0000 UTC m=+1946.101059731" Feb 17 18:17:04 crc kubenswrapper[4680]: I0217 18:17:04.588384 4680 generic.go:334] "Generic (PLEG): container finished" podID="5731e4b0-b432-4c63-aa65-59eed4740731" 
containerID="9269470787cb655ffa99fa98d0b8904e9680e77ea1102894510ebea29cfc068d" exitCode=0 Feb 17 18:17:04 crc kubenswrapper[4680]: I0217 18:17:04.588443 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h76n9" event={"ID":"5731e4b0-b432-4c63-aa65-59eed4740731","Type":"ContainerDied","Data":"9269470787cb655ffa99fa98d0b8904e9680e77ea1102894510ebea29cfc068d"} Feb 17 18:17:05 crc kubenswrapper[4680]: I0217 18:17:05.588471 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:17:05 crc kubenswrapper[4680]: I0217 18:17:05.589000 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:17:05 crc kubenswrapper[4680]: I0217 18:17:05.999007 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h76n9" Feb 17 18:17:06 crc kubenswrapper[4680]: I0217 18:17:06.033860 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5731e4b0-b432-4c63-aa65-59eed4740731-ssh-key-openstack-edpm-ipam\") pod \"5731e4b0-b432-4c63-aa65-59eed4740731\" (UID: \"5731e4b0-b432-4c63-aa65-59eed4740731\") " Feb 17 18:17:06 crc kubenswrapper[4680]: I0217 18:17:06.033922 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gzck\" (UniqueName: \"kubernetes.io/projected/5731e4b0-b432-4c63-aa65-59eed4740731-kube-api-access-5gzck\") pod \"5731e4b0-b432-4c63-aa65-59eed4740731\" (UID: \"5731e4b0-b432-4c63-aa65-59eed4740731\") " Feb 17 18:17:06 crc kubenswrapper[4680]: I0217 18:17:06.034163 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5731e4b0-b432-4c63-aa65-59eed4740731-inventory\") pod \"5731e4b0-b432-4c63-aa65-59eed4740731\" (UID: \"5731e4b0-b432-4c63-aa65-59eed4740731\") " Feb 17 18:17:06 crc kubenswrapper[4680]: I0217 18:17:06.042820 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5731e4b0-b432-4c63-aa65-59eed4740731-kube-api-access-5gzck" (OuterVolumeSpecName: "kube-api-access-5gzck") pod "5731e4b0-b432-4c63-aa65-59eed4740731" (UID: "5731e4b0-b432-4c63-aa65-59eed4740731"). InnerVolumeSpecName "kube-api-access-5gzck". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:17:06 crc kubenswrapper[4680]: I0217 18:17:06.064949 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5731e4b0-b432-4c63-aa65-59eed4740731-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5731e4b0-b432-4c63-aa65-59eed4740731" (UID: "5731e4b0-b432-4c63-aa65-59eed4740731"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:17:06 crc kubenswrapper[4680]: I0217 18:17:06.068752 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5731e4b0-b432-4c63-aa65-59eed4740731-inventory" (OuterVolumeSpecName: "inventory") pod "5731e4b0-b432-4c63-aa65-59eed4740731" (UID: "5731e4b0-b432-4c63-aa65-59eed4740731"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:17:06 crc kubenswrapper[4680]: I0217 18:17:06.136849 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5731e4b0-b432-4c63-aa65-59eed4740731-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 18:17:06 crc kubenswrapper[4680]: I0217 18:17:06.136985 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gzck\" (UniqueName: \"kubernetes.io/projected/5731e4b0-b432-4c63-aa65-59eed4740731-kube-api-access-5gzck\") on node \"crc\" DevicePath \"\"" Feb 17 18:17:06 crc kubenswrapper[4680]: I0217 18:17:06.137765 4680 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5731e4b0-b432-4c63-aa65-59eed4740731-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 18:17:06 crc kubenswrapper[4680]: I0217 18:17:06.607838 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h76n9" event={"ID":"5731e4b0-b432-4c63-aa65-59eed4740731","Type":"ContainerDied","Data":"37b8a8ed1b2ad4e10b84ef10e83355f94727f8166d9605ee8f48d2341b07fa47"} Feb 17 18:17:06 crc kubenswrapper[4680]: I0217 18:17:06.607873 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37b8a8ed1b2ad4e10b84ef10e83355f94727f8166d9605ee8f48d2341b07fa47" Feb 17 18:17:06 crc kubenswrapper[4680]: I0217 18:17:06.607885 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h76n9" Feb 17 18:17:06 crc kubenswrapper[4680]: I0217 18:17:06.683684 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcn7b"] Feb 17 18:17:06 crc kubenswrapper[4680]: E0217 18:17:06.684075 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5731e4b0-b432-4c63-aa65-59eed4740731" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 17 18:17:06 crc kubenswrapper[4680]: I0217 18:17:06.684092 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="5731e4b0-b432-4c63-aa65-59eed4740731" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 17 18:17:06 crc kubenswrapper[4680]: I0217 18:17:06.684252 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="5731e4b0-b432-4c63-aa65-59eed4740731" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 17 18:17:06 crc kubenswrapper[4680]: I0217 18:17:06.686180 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcn7b" Feb 17 18:17:06 crc kubenswrapper[4680]: I0217 18:17:06.690013 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 18:17:06 crc kubenswrapper[4680]: I0217 18:17:06.690279 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mhdwq" Feb 17 18:17:06 crc kubenswrapper[4680]: I0217 18:17:06.690665 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 18:17:06 crc kubenswrapper[4680]: I0217 18:17:06.691407 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 18:17:06 crc kubenswrapper[4680]: I0217 18:17:06.703271 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcn7b"] Feb 17 18:17:06 crc kubenswrapper[4680]: I0217 18:17:06.792384 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkqt8\" (UniqueName: \"kubernetes.io/projected/aaa35e71-4185-4074-8f0a-eb601168a788-kube-api-access-zkqt8\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tcn7b\" (UID: \"aaa35e71-4185-4074-8f0a-eb601168a788\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcn7b" Feb 17 18:17:06 crc kubenswrapper[4680]: I0217 18:17:06.792449 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aaa35e71-4185-4074-8f0a-eb601168a788-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tcn7b\" (UID: \"aaa35e71-4185-4074-8f0a-eb601168a788\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcn7b" Feb 17 18:17:06 crc kubenswrapper[4680]: I0217 18:17:06.792834 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aaa35e71-4185-4074-8f0a-eb601168a788-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tcn7b\" (UID: \"aaa35e71-4185-4074-8f0a-eb601168a788\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcn7b" Feb 17 18:17:06 crc kubenswrapper[4680]: I0217 18:17:06.894416 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkqt8\" (UniqueName: \"kubernetes.io/projected/aaa35e71-4185-4074-8f0a-eb601168a788-kube-api-access-zkqt8\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tcn7b\" (UID: \"aaa35e71-4185-4074-8f0a-eb601168a788\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcn7b" Feb 17 18:17:06 crc kubenswrapper[4680]: I0217 18:17:06.894475 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aaa35e71-4185-4074-8f0a-eb601168a788-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tcn7b\" (UID: \"aaa35e71-4185-4074-8f0a-eb601168a788\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcn7b" Feb 17 18:17:06 crc kubenswrapper[4680]: I0217 18:17:06.895192 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aaa35e71-4185-4074-8f0a-eb601168a788-ssh-key-openstack-edpm-ipam\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-tcn7b\" (UID: \"aaa35e71-4185-4074-8f0a-eb601168a788\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcn7b" Feb 17 18:17:06 crc kubenswrapper[4680]: I0217 18:17:06.903911 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aaa35e71-4185-4074-8f0a-eb601168a788-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tcn7b\" (UID: \"aaa35e71-4185-4074-8f0a-eb601168a788\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcn7b" Feb 17 18:17:06 crc kubenswrapper[4680]: I0217 18:17:06.904034 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aaa35e71-4185-4074-8f0a-eb601168a788-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tcn7b\" (UID: \"aaa35e71-4185-4074-8f0a-eb601168a788\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcn7b" Feb 17 18:17:06 crc kubenswrapper[4680]: I0217 18:17:06.915415 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkqt8\" (UniqueName: \"kubernetes.io/projected/aaa35e71-4185-4074-8f0a-eb601168a788-kube-api-access-zkqt8\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tcn7b\" (UID: \"aaa35e71-4185-4074-8f0a-eb601168a788\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcn7b" Feb 17 18:17:07 crc kubenswrapper[4680]: I0217 18:17:07.011043 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcn7b" Feb 17 18:17:07 crc kubenswrapper[4680]: I0217 18:17:07.549022 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcn7b"] Feb 17 18:17:07 crc kubenswrapper[4680]: I0217 18:17:07.617670 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcn7b" event={"ID":"aaa35e71-4185-4074-8f0a-eb601168a788","Type":"ContainerStarted","Data":"ee33f3cd1aaae0ca2becabb9613ae9acf53a98e8c0f3c07072d33f7476a78ccc"} Feb 17 18:17:08 crc kubenswrapper[4680]: I0217 18:17:08.637534 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcn7b" event={"ID":"aaa35e71-4185-4074-8f0a-eb601168a788","Type":"ContainerStarted","Data":"00454dcb52077d42b6b5a47ebcc6fd475de753adcf7d015026e4b71a696a447b"} Feb 17 18:17:08 crc kubenswrapper[4680]: I0217 18:17:08.670624 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcn7b" podStartSLOduration=2.272434391 podStartE2EDuration="2.670600444s" podCreationTimestamp="2026-02-17 18:17:06 +0000 UTC" firstStartedPulling="2026-02-17 18:17:07.55409683 +0000 UTC m=+1957.104995519" lastFinishedPulling="2026-02-17 18:17:07.952262893 +0000 UTC m=+1957.503161572" observedRunningTime="2026-02-17 18:17:08.658498164 +0000 UTC m=+1958.209396853" watchObservedRunningTime="2026-02-17 18:17:08.670600444 +0000 UTC m=+1958.221499133" Feb 17 18:17:17 crc kubenswrapper[4680]: I0217 18:17:17.709382 4680 generic.go:334] "Generic (PLEG): container finished" podID="aaa35e71-4185-4074-8f0a-eb601168a788" containerID="00454dcb52077d42b6b5a47ebcc6fd475de753adcf7d015026e4b71a696a447b" exitCode=0 Feb 17 18:17:17 crc kubenswrapper[4680]: I0217 18:17:17.709453 4680 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcn7b" event={"ID":"aaa35e71-4185-4074-8f0a-eb601168a788","Type":"ContainerDied","Data":"00454dcb52077d42b6b5a47ebcc6fd475de753adcf7d015026e4b71a696a447b"} Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.112554 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcn7b" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.240493 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aaa35e71-4185-4074-8f0a-eb601168a788-inventory\") pod \"aaa35e71-4185-4074-8f0a-eb601168a788\" (UID: \"aaa35e71-4185-4074-8f0a-eb601168a788\") " Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.240839 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkqt8\" (UniqueName: \"kubernetes.io/projected/aaa35e71-4185-4074-8f0a-eb601168a788-kube-api-access-zkqt8\") pod \"aaa35e71-4185-4074-8f0a-eb601168a788\" (UID: \"aaa35e71-4185-4074-8f0a-eb601168a788\") " Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.241152 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aaa35e71-4185-4074-8f0a-eb601168a788-ssh-key-openstack-edpm-ipam\") pod \"aaa35e71-4185-4074-8f0a-eb601168a788\" (UID: \"aaa35e71-4185-4074-8f0a-eb601168a788\") " Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.247060 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaa35e71-4185-4074-8f0a-eb601168a788-kube-api-access-zkqt8" (OuterVolumeSpecName: "kube-api-access-zkqt8") pod "aaa35e71-4185-4074-8f0a-eb601168a788" (UID: "aaa35e71-4185-4074-8f0a-eb601168a788"). InnerVolumeSpecName "kube-api-access-zkqt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.270474 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaa35e71-4185-4074-8f0a-eb601168a788-inventory" (OuterVolumeSpecName: "inventory") pod "aaa35e71-4185-4074-8f0a-eb601168a788" (UID: "aaa35e71-4185-4074-8f0a-eb601168a788"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.271679 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaa35e71-4185-4074-8f0a-eb601168a788-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "aaa35e71-4185-4074-8f0a-eb601168a788" (UID: "aaa35e71-4185-4074-8f0a-eb601168a788"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.343536 4680 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aaa35e71-4185-4074-8f0a-eb601168a788-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.343780 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkqt8\" (UniqueName: \"kubernetes.io/projected/aaa35e71-4185-4074-8f0a-eb601168a788-kube-api-access-zkqt8\") on node \"crc\" DevicePath \"\"" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.343798 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aaa35e71-4185-4074-8f0a-eb601168a788-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.726576 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcn7b" event={"ID":"aaa35e71-4185-4074-8f0a-eb601168a788","Type":"ContainerDied","Data":"ee33f3cd1aaae0ca2becabb9613ae9acf53a98e8c0f3c07072d33f7476a78ccc"} Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.726615 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee33f3cd1aaae0ca2becabb9613ae9acf53a98e8c0f3c07072d33f7476a78ccc" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.726669 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tcn7b" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.820783 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf"] Feb 17 18:17:19 crc kubenswrapper[4680]: E0217 18:17:19.821154 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaa35e71-4185-4074-8f0a-eb601168a788" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.821171 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaa35e71-4185-4074-8f0a-eb601168a788" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.821426 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaa35e71-4185-4074-8f0a-eb601168a788" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.822012 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.831755 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.832423 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.832466 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.832755 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.832896 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.832909 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.833159 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.835947 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mhdwq" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.846037 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf"] Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.889190 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrdbp\" (UniqueName: \"kubernetes.io/projected/3cb6431e-7411-4f54-9b55-6ba091b50804-kube-api-access-zrdbp\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.889481 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.889590 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.889694 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3cb6431e-7411-4f54-9b55-6ba091b50804-openstack-edpm-ipam-libvirt-default-certs-0\") 
pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.889814 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3cb6431e-7411-4f54-9b55-6ba091b50804-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.889959 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.890163 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3cb6431e-7411-4f54-9b55-6ba091b50804-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.890255 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.890401 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.890552 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.890688 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.890812 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3cb6431e-7411-4f54-9b55-6ba091b50804-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.890960 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.891084 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.992075 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3cb6431e-7411-4f54-9b55-6ba091b50804-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.992441 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.992528 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.992626 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrdbp\" (UniqueName: \"kubernetes.io/projected/3cb6431e-7411-4f54-9b55-6ba091b50804-kube-api-access-zrdbp\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 
18:17:19.992719 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.992800 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.992893 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3cb6431e-7411-4f54-9b55-6ba091b50804-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.993260 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3cb6431e-7411-4f54-9b55-6ba091b50804-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.993422 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.993549 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3cb6431e-7411-4f54-9b55-6ba091b50804-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.993629 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.993719 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.993814 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.993892 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:19 crc kubenswrapper[4680]: I0217 18:17:19.998924 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:20 crc kubenswrapper[4680]: I0217 18:17:19.999014 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:20 crc kubenswrapper[4680]: I0217 18:17:19.999274 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3cb6431e-7411-4f54-9b55-6ba091b50804-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:20 crc kubenswrapper[4680]: I0217 18:17:19.999374 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3cb6431e-7411-4f54-9b55-6ba091b50804-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:20 crc kubenswrapper[4680]: I0217 18:17:19.999693 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:20 crc kubenswrapper[4680]: I0217 18:17:19.999697 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:20 crc kubenswrapper[4680]: I0217 18:17:20.000198 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:20 crc kubenswrapper[4680]: I0217 18:17:20.001042 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:20 crc kubenswrapper[4680]: I0217 18:17:20.001225 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3cb6431e-7411-4f54-9b55-6ba091b50804-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:20 crc kubenswrapper[4680]: I0217 18:17:20.003272 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3cb6431e-7411-4f54-9b55-6ba091b50804-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:20 crc kubenswrapper[4680]: I0217 18:17:20.003712 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:20 crc kubenswrapper[4680]: I0217 18:17:20.004282 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:20 crc kubenswrapper[4680]: I0217 18:17:20.011862 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:20 crc kubenswrapper[4680]: I0217 18:17:20.014295 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrdbp\" (UniqueName: \"kubernetes.io/projected/3cb6431e-7411-4f54-9b55-6ba091b50804-kube-api-access-zrdbp\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-flssf\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:20 crc kubenswrapper[4680]: I0217 18:17:20.141051 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:20 crc kubenswrapper[4680]: I0217 18:17:20.754770 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf"] Feb 17 18:17:21 crc kubenswrapper[4680]: I0217 18:17:21.743027 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" event={"ID":"3cb6431e-7411-4f54-9b55-6ba091b50804","Type":"ContainerStarted","Data":"bed071428bb2d723f64b50e1981b4b13c30508a05ec97ece87201a66e7d6f231"} Feb 17 18:17:21 crc kubenswrapper[4680]: I0217 18:17:21.743350 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" event={"ID":"3cb6431e-7411-4f54-9b55-6ba091b50804","Type":"ContainerStarted","Data":"20628be17acd4fd3c4ace4f51565378cb101a52ebc4e75d2bbccce1e891247eb"} Feb 17 18:17:21 crc kubenswrapper[4680]: I0217 18:17:21.767722 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" podStartSLOduration=2.154108638 podStartE2EDuration="2.767695861s" podCreationTimestamp="2026-02-17 18:17:19 +0000 UTC" firstStartedPulling="2026-02-17 18:17:20.765525935 +0000 UTC m=+1970.316424614" lastFinishedPulling="2026-02-17 18:17:21.379113158 +0000 UTC m=+1970.930011837" observedRunningTime="2026-02-17 18:17:21.759357899 +0000 UTC m=+1971.310256578" watchObservedRunningTime="2026-02-17 18:17:21.767695861 +0000 UTC m=+1971.318594550" Feb 17 18:17:35 crc kubenswrapper[4680]: I0217 18:17:35.589292 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:17:35 crc kubenswrapper[4680]: I0217 18:17:35.589832 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:17:56 crc kubenswrapper[4680]: I0217 18:17:56.029144 4680 generic.go:334] "Generic (PLEG): container finished" podID="3cb6431e-7411-4f54-9b55-6ba091b50804" containerID="bed071428bb2d723f64b50e1981b4b13c30508a05ec97ece87201a66e7d6f231" exitCode=0 Feb 17 18:17:56 crc kubenswrapper[4680]: I0217 18:17:56.029243 4680 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" event={"ID":"3cb6431e-7411-4f54-9b55-6ba091b50804","Type":"ContainerDied","Data":"bed071428bb2d723f64b50e1981b4b13c30508a05ec97ece87201a66e7d6f231"} Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.491525 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.624906 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrdbp\" (UniqueName: \"kubernetes.io/projected/3cb6431e-7411-4f54-9b55-6ba091b50804-kube-api-access-zrdbp\") pod \"3cb6431e-7411-4f54-9b55-6ba091b50804\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.625247 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3cb6431e-7411-4f54-9b55-6ba091b50804-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"3cb6431e-7411-4f54-9b55-6ba091b50804\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.625806 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3cb6431e-7411-4f54-9b55-6ba091b50804-openstack-edpm-ipam-ovn-default-certs-0\") pod \"3cb6431e-7411-4f54-9b55-6ba091b50804\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.625918 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-inventory\") pod \"3cb6431e-7411-4f54-9b55-6ba091b50804\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.626042 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-nova-combined-ca-bundle\") pod \"3cb6431e-7411-4f54-9b55-6ba091b50804\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.626214 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-libvirt-combined-ca-bundle\") pod \"3cb6431e-7411-4f54-9b55-6ba091b50804\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.626361 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-bootstrap-combined-ca-bundle\") pod \"3cb6431e-7411-4f54-9b55-6ba091b50804\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.626884 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3cb6431e-7411-4f54-9b55-6ba091b50804-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"3cb6431e-7411-4f54-9b55-6ba091b50804\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " 
Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.627064 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-ovn-combined-ca-bundle\") pod \"3cb6431e-7411-4f54-9b55-6ba091b50804\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.627194 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-telemetry-combined-ca-bundle\") pod \"3cb6431e-7411-4f54-9b55-6ba091b50804\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.627679 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-ssh-key-openstack-edpm-ipam\") pod \"3cb6431e-7411-4f54-9b55-6ba091b50804\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.627812 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-neutron-metadata-combined-ca-bundle\") pod \"3cb6431e-7411-4f54-9b55-6ba091b50804\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.627952 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3cb6431e-7411-4f54-9b55-6ba091b50804-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"3cb6431e-7411-4f54-9b55-6ba091b50804\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.628145 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-repo-setup-combined-ca-bundle\") pod \"3cb6431e-7411-4f54-9b55-6ba091b50804\" (UID: \"3cb6431e-7411-4f54-9b55-6ba091b50804\") " Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.632496 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "3cb6431e-7411-4f54-9b55-6ba091b50804" (UID: "3cb6431e-7411-4f54-9b55-6ba091b50804"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.632760 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb6431e-7411-4f54-9b55-6ba091b50804-kube-api-access-zrdbp" (OuterVolumeSpecName: "kube-api-access-zrdbp") pod "3cb6431e-7411-4f54-9b55-6ba091b50804" (UID: "3cb6431e-7411-4f54-9b55-6ba091b50804"). InnerVolumeSpecName "kube-api-access-zrdbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.633464 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "3cb6431e-7411-4f54-9b55-6ba091b50804" (UID: "3cb6431e-7411-4f54-9b55-6ba091b50804"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.633479 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb6431e-7411-4f54-9b55-6ba091b50804-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "3cb6431e-7411-4f54-9b55-6ba091b50804" (UID: "3cb6431e-7411-4f54-9b55-6ba091b50804"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.633543 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "3cb6431e-7411-4f54-9b55-6ba091b50804" (UID: "3cb6431e-7411-4f54-9b55-6ba091b50804"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.634454 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb6431e-7411-4f54-9b55-6ba091b50804-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "3cb6431e-7411-4f54-9b55-6ba091b50804" (UID: "3cb6431e-7411-4f54-9b55-6ba091b50804"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.635017 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb6431e-7411-4f54-9b55-6ba091b50804-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "3cb6431e-7411-4f54-9b55-6ba091b50804" (UID: "3cb6431e-7411-4f54-9b55-6ba091b50804"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.636025 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "3cb6431e-7411-4f54-9b55-6ba091b50804" (UID: "3cb6431e-7411-4f54-9b55-6ba091b50804"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.636436 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "3cb6431e-7411-4f54-9b55-6ba091b50804" (UID: "3cb6431e-7411-4f54-9b55-6ba091b50804"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.637065 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "3cb6431e-7411-4f54-9b55-6ba091b50804" (UID: "3cb6431e-7411-4f54-9b55-6ba091b50804"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.648894 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb6431e-7411-4f54-9b55-6ba091b50804-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "3cb6431e-7411-4f54-9b55-6ba091b50804" (UID: "3cb6431e-7411-4f54-9b55-6ba091b50804"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.655444 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "3cb6431e-7411-4f54-9b55-6ba091b50804" (UID: "3cb6431e-7411-4f54-9b55-6ba091b50804"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.673898 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-inventory" (OuterVolumeSpecName: "inventory") pod "3cb6431e-7411-4f54-9b55-6ba091b50804" (UID: "3cb6431e-7411-4f54-9b55-6ba091b50804"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.684292 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3cb6431e-7411-4f54-9b55-6ba091b50804" (UID: "3cb6431e-7411-4f54-9b55-6ba091b50804"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.730959 4680 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.730998 4680 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.731011 4680 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.731024 4680 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3cb6431e-7411-4f54-9b55-6ba091b50804-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.731038 4680 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.731049 4680 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.731059 4680 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.731070 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.731081 4680 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3cb6431e-7411-4f54-9b55-6ba091b50804-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.731093 4680 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.731107 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrdbp\" (UniqueName: \"kubernetes.io/projected/3cb6431e-7411-4f54-9b55-6ba091b50804-kube-api-access-zrdbp\") on node \"crc\" DevicePath \"\"" Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.731119 4680 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/3cb6431e-7411-4f54-9b55-6ba091b50804-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.731130 4680 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3cb6431e-7411-4f54-9b55-6ba091b50804-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 17 18:17:57 crc kubenswrapper[4680]: I0217 18:17:57.731142 4680 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3cb6431e-7411-4f54-9b55-6ba091b50804-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 18:17:58 crc kubenswrapper[4680]: I0217 18:17:58.046179 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" event={"ID":"3cb6431e-7411-4f54-9b55-6ba091b50804","Type":"ContainerDied","Data":"20628be17acd4fd3c4ace4f51565378cb101a52ebc4e75d2bbccce1e891247eb"} Feb 17 18:17:58 crc kubenswrapper[4680]: I0217 18:17:58.046234 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20628be17acd4fd3c4ace4f51565378cb101a52ebc4e75d2bbccce1e891247eb" Feb 17 18:17:58 crc kubenswrapper[4680]: I0217 18:17:58.046333 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-flssf" Feb 17 18:17:58 crc kubenswrapper[4680]: I0217 18:17:58.160463 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-shzfm"] Feb 17 18:17:58 crc kubenswrapper[4680]: E0217 18:17:58.162795 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cb6431e-7411-4f54-9b55-6ba091b50804" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 17 18:17:58 crc kubenswrapper[4680]: I0217 18:17:58.162944 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb6431e-7411-4f54-9b55-6ba091b50804" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 17 18:17:58 crc kubenswrapper[4680]: I0217 18:17:58.163242 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cb6431e-7411-4f54-9b55-6ba091b50804" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 17 18:17:58 crc kubenswrapper[4680]: I0217 18:17:58.163972 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-shzfm" Feb 17 18:17:58 crc kubenswrapper[4680]: I0217 18:17:58.171760 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 18:17:58 crc kubenswrapper[4680]: I0217 18:17:58.172049 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mhdwq" Feb 17 18:17:58 crc kubenswrapper[4680]: I0217 18:17:58.172194 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 17 18:17:58 crc kubenswrapper[4680]: I0217 18:17:58.172375 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 18:17:58 crc kubenswrapper[4680]: I0217 18:17:58.179178 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-shzfm"] Feb 17 18:17:58 crc kubenswrapper[4680]: I0217 18:17:58.180131 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 18:17:58 crc kubenswrapper[4680]: I0217 18:17:58.240365 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c501ee0c-b4b4-4517-a946-acea84eaf4e9-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-shzfm\" (UID: \"c501ee0c-b4b4-4517-a946-acea84eaf4e9\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-shzfm" Feb 17 18:17:58 crc kubenswrapper[4680]: I0217 18:17:58.240680 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c501ee0c-b4b4-4517-a946-acea84eaf4e9-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-shzfm\" (UID: \"c501ee0c-b4b4-4517-a946-acea84eaf4e9\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-shzfm" Feb 17 18:17:58 crc kubenswrapper[4680]: I0217 18:17:58.240781 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c501ee0c-b4b4-4517-a946-acea84eaf4e9-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-shzfm\" (UID: \"c501ee0c-b4b4-4517-a946-acea84eaf4e9\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-shzfm" Feb 17 18:17:58 crc kubenswrapper[4680]: I0217 18:17:58.240812 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c501ee0c-b4b4-4517-a946-acea84eaf4e9-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-shzfm\" (UID: \"c501ee0c-b4b4-4517-a946-acea84eaf4e9\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-shzfm" Feb 17 18:17:58 crc kubenswrapper[4680]: I0217 18:17:58.240835 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7ngw\" (UniqueName: \"kubernetes.io/projected/c501ee0c-b4b4-4517-a946-acea84eaf4e9-kube-api-access-r7ngw\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-shzfm\" (UID: \"c501ee0c-b4b4-4517-a946-acea84eaf4e9\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-shzfm" Feb 17 18:17:58 crc kubenswrapper[4680]: I0217 18:17:58.342402 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c501ee0c-b4b4-4517-a946-acea84eaf4e9-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-shzfm\" (UID: \"c501ee0c-b4b4-4517-a946-acea84eaf4e9\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-shzfm" Feb 17 18:17:58 crc kubenswrapper[4680]: I0217 18:17:58.342450 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c501ee0c-b4b4-4517-a946-acea84eaf4e9-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-shzfm\" (UID: \"c501ee0c-b4b4-4517-a946-acea84eaf4e9\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-shzfm" Feb 17 18:17:58 crc kubenswrapper[4680]: I0217 18:17:58.342572 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c501ee0c-b4b4-4517-a946-acea84eaf4e9-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-shzfm\" (UID: \"c501ee0c-b4b4-4517-a946-acea84eaf4e9\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-shzfm" Feb 17 18:17:58 crc kubenswrapper[4680]: I0217 18:17:58.342635 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c501ee0c-b4b4-4517-a946-acea84eaf4e9-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-shzfm\" (UID: \"c501ee0c-b4b4-4517-a946-acea84eaf4e9\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-shzfm" Feb 17 18:17:58 crc kubenswrapper[4680]: I0217 18:17:58.342659 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7ngw\" (UniqueName: \"kubernetes.io/projected/c501ee0c-b4b4-4517-a946-acea84eaf4e9-kube-api-access-r7ngw\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-shzfm\" (UID: \"c501ee0c-b4b4-4517-a946-acea84eaf4e9\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-shzfm" Feb 17 18:17:58 crc kubenswrapper[4680]: I0217 18:17:58.343969 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c501ee0c-b4b4-4517-a946-acea84eaf4e9-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-shzfm\" (UID: \"c501ee0c-b4b4-4517-a946-acea84eaf4e9\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-shzfm" Feb 17 18:17:58 crc kubenswrapper[4680]: I0217 18:17:58.348860 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c501ee0c-b4b4-4517-a946-acea84eaf4e9-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-shzfm\" (UID: \"c501ee0c-b4b4-4517-a946-acea84eaf4e9\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-shzfm" Feb 17 18:17:58 crc kubenswrapper[4680]: I0217 18:17:58.349073 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c501ee0c-b4b4-4517-a946-acea84eaf4e9-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-shzfm\" (UID: \"c501ee0c-b4b4-4517-a946-acea84eaf4e9\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-shzfm" Feb 17 18:17:58 crc kubenswrapper[4680]: I0217 18:17:58.353836 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/c501ee0c-b4b4-4517-a946-acea84eaf4e9-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-shzfm\" (UID: \"c501ee0c-b4b4-4517-a946-acea84eaf4e9\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-shzfm" Feb 17 18:17:58 crc kubenswrapper[4680]: I0217 18:17:58.362286 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7ngw\" (UniqueName: \"kubernetes.io/projected/c501ee0c-b4b4-4517-a946-acea84eaf4e9-kube-api-access-r7ngw\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-shzfm\" (UID: \"c501ee0c-b4b4-4517-a946-acea84eaf4e9\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-shzfm" Feb 17 18:17:58 crc kubenswrapper[4680]: I0217 18:17:58.495286 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-shzfm" Feb 17 18:17:59 crc kubenswrapper[4680]: I0217 18:17:59.029588 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-shzfm"] Feb 17 18:17:59 crc kubenswrapper[4680]: I0217 18:17:59.056025 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-shzfm" event={"ID":"c501ee0c-b4b4-4517-a946-acea84eaf4e9","Type":"ContainerStarted","Data":"708157428ab5e42e57822e3e70a06ba3055554afc6894c18345eaa21cf96e90e"} Feb 17 18:18:00 crc kubenswrapper[4680]: I0217 18:18:00.065100 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-shzfm" event={"ID":"c501ee0c-b4b4-4517-a946-acea84eaf4e9","Type":"ContainerStarted","Data":"317e0993fe81d7285e3c61c990ac409f678c3f0941961c3b9b64fd9e222bfd77"} Feb 17 18:18:00 crc kubenswrapper[4680]: I0217 18:18:00.087209 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-shzfm" podStartSLOduration=1.602662918 podStartE2EDuration="2.087194852s" podCreationTimestamp="2026-02-17 18:17:58 +0000 UTC" firstStartedPulling="2026-02-17 18:17:59.03766249 +0000 UTC m=+2008.588561169" lastFinishedPulling="2026-02-17 18:17:59.522194424 +0000 UTC m=+2009.073093103" observedRunningTime="2026-02-17 18:18:00.078921542 +0000 UTC m=+2009.629820221" watchObservedRunningTime="2026-02-17 18:18:00.087194852 +0000 UTC m=+2009.638093531" Feb 17 18:18:05 crc kubenswrapper[4680]: I0217 18:18:05.588488 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:18:05 crc kubenswrapper[4680]: I0217 18:18:05.589557 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:18:05 crc kubenswrapper[4680]: I0217 18:18:05.589665 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" Feb 17 18:18:05 crc kubenswrapper[4680]: I0217 18:18:05.590546 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"cd71dcda81999845ebec67c7f0fbf1d96faca4f692411c4e3318e7b0e0daed8f"} pod="openshift-machine-config-operator/machine-config-daemon-rfw56" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 18:18:05 crc kubenswrapper[4680]: I0217 18:18:05.590621 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" containerID="cri-o://cd71dcda81999845ebec67c7f0fbf1d96faca4f692411c4e3318e7b0e0daed8f" gracePeriod=600 Feb 17 18:18:06 crc kubenswrapper[4680]: I0217 18:18:06.114735 4680 generic.go:334] "Generic (PLEG): container finished" podID="8dd08058-016e-4607-8be6-40463674c2fe" containerID="cd71dcda81999845ebec67c7f0fbf1d96faca4f692411c4e3318e7b0e0daed8f" exitCode=0 Feb 17 18:18:06 crc kubenswrapper[4680]: I0217 18:18:06.114818 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerDied","Data":"cd71dcda81999845ebec67c7f0fbf1d96faca4f692411c4e3318e7b0e0daed8f"} Feb 17 18:18:06 crc kubenswrapper[4680]: I0217 18:18:06.115655 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerStarted","Data":"05e9321314b8bcfdfcee519e4b39d83405420f17828fd66310ce237cc3c9ab78"} Feb 17 18:18:06 crc kubenswrapper[4680]: I0217 18:18:06.115718 4680 scope.go:117] "RemoveContainer" containerID="3d8de076bce422cc1e91fb92ebdaf2197336fde15a42ede9bcc224d5d09c8ff1" Feb 17 18:19:00 crc kubenswrapper[4680]: I0217 18:19:00.590191 4680 generic.go:334] "Generic (PLEG): container finished" podID="c501ee0c-b4b4-4517-a946-acea84eaf4e9" containerID="317e0993fe81d7285e3c61c990ac409f678c3f0941961c3b9b64fd9e222bfd77" exitCode=0 Feb 17 18:19:00 crc kubenswrapper[4680]: I0217 18:19:00.590302 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-shzfm" event={"ID":"c501ee0c-b4b4-4517-a946-acea84eaf4e9","Type":"ContainerDied","Data":"317e0993fe81d7285e3c61c990ac409f678c3f0941961c3b9b64fd9e222bfd77"} Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.009345 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-shzfm" Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.134572 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c501ee0c-b4b4-4517-a946-acea84eaf4e9-ssh-key-openstack-edpm-ipam\") pod \"c501ee0c-b4b4-4517-a946-acea84eaf4e9\" (UID: \"c501ee0c-b4b4-4517-a946-acea84eaf4e9\") " Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.134634 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7ngw\" (UniqueName: \"kubernetes.io/projected/c501ee0c-b4b4-4517-a946-acea84eaf4e9-kube-api-access-r7ngw\") pod \"c501ee0c-b4b4-4517-a946-acea84eaf4e9\" (UID: \"c501ee0c-b4b4-4517-a946-acea84eaf4e9\") " Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.134678 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c501ee0c-b4b4-4517-a946-acea84eaf4e9-ovn-combined-ca-bundle\") pod \"c501ee0c-b4b4-4517-a946-acea84eaf4e9\" (UID: \"c501ee0c-b4b4-4517-a946-acea84eaf4e9\") " Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.134698 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c501ee0c-b4b4-4517-a946-acea84eaf4e9-ovncontroller-config-0\") pod \"c501ee0c-b4b4-4517-a946-acea84eaf4e9\" (UID: \"c501ee0c-b4b4-4517-a946-acea84eaf4e9\") " Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.134875 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c501ee0c-b4b4-4517-a946-acea84eaf4e9-inventory\") pod \"c501ee0c-b4b4-4517-a946-acea84eaf4e9\" (UID: \"c501ee0c-b4b4-4517-a946-acea84eaf4e9\") " Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.141569 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c501ee0c-b4b4-4517-a946-acea84eaf4e9-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "c501ee0c-b4b4-4517-a946-acea84eaf4e9" (UID: "c501ee0c-b4b4-4517-a946-acea84eaf4e9"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.141604 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c501ee0c-b4b4-4517-a946-acea84eaf4e9-kube-api-access-r7ngw" (OuterVolumeSpecName: "kube-api-access-r7ngw") pod "c501ee0c-b4b4-4517-a946-acea84eaf4e9" (UID: "c501ee0c-b4b4-4517-a946-acea84eaf4e9"). InnerVolumeSpecName "kube-api-access-r7ngw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.163902 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c501ee0c-b4b4-4517-a946-acea84eaf4e9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c501ee0c-b4b4-4517-a946-acea84eaf4e9" (UID: "c501ee0c-b4b4-4517-a946-acea84eaf4e9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.164780 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c501ee0c-b4b4-4517-a946-acea84eaf4e9-inventory" (OuterVolumeSpecName: "inventory") pod "c501ee0c-b4b4-4517-a946-acea84eaf4e9" (UID: "c501ee0c-b4b4-4517-a946-acea84eaf4e9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.165651 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c501ee0c-b4b4-4517-a946-acea84eaf4e9-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "c501ee0c-b4b4-4517-a946-acea84eaf4e9" (UID: "c501ee0c-b4b4-4517-a946-acea84eaf4e9"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.236784 4680 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c501ee0c-b4b4-4517-a946-acea84eaf4e9-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.236827 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c501ee0c-b4b4-4517-a946-acea84eaf4e9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.236844 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7ngw\" (UniqueName: \"kubernetes.io/projected/c501ee0c-b4b4-4517-a946-acea84eaf4e9-kube-api-access-r7ngw\") on node \"crc\" DevicePath \"\"" Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.236856 4680 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c501ee0c-b4b4-4517-a946-acea84eaf4e9-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.236868 4680 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c501ee0c-b4b4-4517-a946-acea84eaf4e9-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.608590 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-shzfm" event={"ID":"c501ee0c-b4b4-4517-a946-acea84eaf4e9","Type":"ContainerDied","Data":"708157428ab5e42e57822e3e70a06ba3055554afc6894c18345eaa21cf96e90e"} Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.608968 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="708157428ab5e42e57822e3e70a06ba3055554afc6894c18345eaa21cf96e90e" Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.608672 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-shzfm" Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.769009 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z"] Feb 17 18:19:02 crc kubenswrapper[4680]: E0217 18:19:02.769680 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c501ee0c-b4b4-4517-a946-acea84eaf4e9" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.769762 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="c501ee0c-b4b4-4517-a946-acea84eaf4e9" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.770028 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="c501ee0c-b4b4-4517-a946-acea84eaf4e9" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.770862 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z" Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.778061 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.778157 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.778212 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.778247 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.778268 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.778048 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mhdwq" Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.781392 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z"] Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.948392 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z\" (UID: \"6966330b-ec2c-4a0d-a3b2-50c582e0eb69\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z" Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.948464 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z\" (UID: \"6966330b-ec2c-4a0d-a3b2-50c582e0eb69\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z" Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.948576 4680 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x44nx\" (UniqueName: \"kubernetes.io/projected/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-kube-api-access-x44nx\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z\" (UID: \"6966330b-ec2c-4a0d-a3b2-50c582e0eb69\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z" Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.948616 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z\" (UID: \"6966330b-ec2c-4a0d-a3b2-50c582e0eb69\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z" Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.948646 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z\" (UID: \"6966330b-ec2c-4a0d-a3b2-50c582e0eb69\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z" Feb 17 18:19:02 crc kubenswrapper[4680]: I0217 18:19:02.948804 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z\" (UID: \"6966330b-ec2c-4a0d-a3b2-50c582e0eb69\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z" Feb 17 18:19:03 crc kubenswrapper[4680]: I0217 18:19:03.050120 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z\" (UID: \"6966330b-ec2c-4a0d-a3b2-50c582e0eb69\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z" Feb 17 18:19:03 crc kubenswrapper[4680]: I0217 18:19:03.050253 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z\" (UID: \"6966330b-ec2c-4a0d-a3b2-50c582e0eb69\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z" Feb 17 18:19:03 crc kubenswrapper[4680]: I0217 18:19:03.050302 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z\" (UID: \"6966330b-ec2c-4a0d-a3b2-50c582e0eb69\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z" Feb 17 18:19:03 crc kubenswrapper[4680]: I0217 18:19:03.050419 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z\" (UID: \"6966330b-ec2c-4a0d-a3b2-50c582e0eb69\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z" Feb 17 18:19:03 crc kubenswrapper[4680]: I0217 18:19:03.050482 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x44nx\" (UniqueName: \"kubernetes.io/projected/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-kube-api-access-x44nx\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z\" (UID: \"6966330b-ec2c-4a0d-a3b2-50c582e0eb69\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z" Feb 17 18:19:03 crc kubenswrapper[4680]: I0217 18:19:03.050512 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z\" (UID: \"6966330b-ec2c-4a0d-a3b2-50c582e0eb69\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z" Feb 17 18:19:03 crc kubenswrapper[4680]: I0217 18:19:03.054743 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z\" (UID: \"6966330b-ec2c-4a0d-a3b2-50c582e0eb69\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z" Feb 17 18:19:03 crc kubenswrapper[4680]: I0217 18:19:03.054881 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z\" (UID: \"6966330b-ec2c-4a0d-a3b2-50c582e0eb69\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z" Feb 17 18:19:03 crc kubenswrapper[4680]: I0217 18:19:03.055104 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z\" (UID: \"6966330b-ec2c-4a0d-a3b2-50c582e0eb69\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z" Feb 17 18:19:03 crc kubenswrapper[4680]: I0217 18:19:03.055475 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z\" (UID: \"6966330b-ec2c-4a0d-a3b2-50c582e0eb69\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z" Feb 17 18:19:03 crc kubenswrapper[4680]: I0217 18:19:03.059567 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z\" (UID: \"6966330b-ec2c-4a0d-a3b2-50c582e0eb69\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z" Feb 17 18:19:03 crc 
kubenswrapper[4680]: I0217 18:19:03.072512 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x44nx\" (UniqueName: \"kubernetes.io/projected/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-kube-api-access-x44nx\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z\" (UID: \"6966330b-ec2c-4a0d-a3b2-50c582e0eb69\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z" Feb 17 18:19:03 crc kubenswrapper[4680]: I0217 18:19:03.088559 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z" Feb 17 18:19:03 crc kubenswrapper[4680]: I0217 18:19:03.624822 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z"] Feb 17 18:19:04 crc kubenswrapper[4680]: I0217 18:19:04.624232 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z" event={"ID":"6966330b-ec2c-4a0d-a3b2-50c582e0eb69","Type":"ContainerStarted","Data":"946b510859e74056a38935e4b8fb254565185e3f0f81f847a38a841425d7fb6f"} Feb 17 18:19:04 crc kubenswrapper[4680]: I0217 18:19:04.624575 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z" event={"ID":"6966330b-ec2c-4a0d-a3b2-50c582e0eb69","Type":"ContainerStarted","Data":"2f3f19be3fe45a7f66034d0f1d3b1418c35018ed9114901928bbf365a82f020e"} Feb 17 18:19:04 crc kubenswrapper[4680]: I0217 18:19:04.647436 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z" podStartSLOduration=2.225440583 podStartE2EDuration="2.647406523s" podCreationTimestamp="2026-02-17 18:19:02 +0000 UTC" firstStartedPulling="2026-02-17 18:19:03.640647754 +0000 UTC m=+2073.191546433" lastFinishedPulling="2026-02-17 18:19:04.062613694 +0000 UTC m=+2073.613512373" observedRunningTime="2026-02-17 18:19:04.637896674 +0000 UTC m=+2074.188795353" watchObservedRunningTime="2026-02-17 18:19:04.647406523 +0000 UTC m=+2074.198305192" Feb 17 18:19:09 crc kubenswrapper[4680]: I0217 18:19:09.684520 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w5p28"] Feb 17 18:19:09 crc kubenswrapper[4680]: I0217 18:19:09.686890 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5p28" Feb 17 18:19:09 crc kubenswrapper[4680]: I0217 18:19:09.700372 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5p28"] Feb 17 18:19:09 crc kubenswrapper[4680]: I0217 18:19:09.770931 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7f8h\" (UniqueName: \"kubernetes.io/projected/eb5b272d-af47-4c09-ba67-9647e6cf277f-kube-api-access-z7f8h\") pod \"redhat-marketplace-w5p28\" (UID: \"eb5b272d-af47-4c09-ba67-9647e6cf277f\") " pod="openshift-marketplace/redhat-marketplace-w5p28" Feb 17 18:19:09 crc kubenswrapper[4680]: I0217 18:19:09.771096 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb5b272d-af47-4c09-ba67-9647e6cf277f-catalog-content\") pod \"redhat-marketplace-w5p28\" (UID: \"eb5b272d-af47-4c09-ba67-9647e6cf277f\") " pod="openshift-marketplace/redhat-marketplace-w5p28" Feb 17 18:19:09 crc kubenswrapper[4680]: I0217 18:19:09.771199 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb5b272d-af47-4c09-ba67-9647e6cf277f-utilities\") pod \"redhat-marketplace-w5p28\" (UID: \"eb5b272d-af47-4c09-ba67-9647e6cf277f\") " pod="openshift-marketplace/redhat-marketplace-w5p28" Feb 17 18:19:09 crc kubenswrapper[4680]: I0217 18:19:09.873212 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7f8h\" (UniqueName: \"kubernetes.io/projected/eb5b272d-af47-4c09-ba67-9647e6cf277f-kube-api-access-z7f8h\") pod \"redhat-marketplace-w5p28\" (UID: \"eb5b272d-af47-4c09-ba67-9647e6cf277f\") " pod="openshift-marketplace/redhat-marketplace-w5p28" Feb 17 18:19:09 crc kubenswrapper[4680]: I0217 18:19:09.873334 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb5b272d-af47-4c09-ba67-9647e6cf277f-catalog-content\") pod \"redhat-marketplace-w5p28\" (UID: \"eb5b272d-af47-4c09-ba67-9647e6cf277f\") " pod="openshift-marketplace/redhat-marketplace-w5p28" Feb 17 18:19:09 crc kubenswrapper[4680]: I0217 18:19:09.873395 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb5b272d-af47-4c09-ba67-9647e6cf277f-utilities\") pod \"redhat-marketplace-w5p28\" (UID: \"eb5b272d-af47-4c09-ba67-9647e6cf277f\") " pod="openshift-marketplace/redhat-marketplace-w5p28" Feb 17 18:19:09 crc kubenswrapper[4680]: I0217 18:19:09.873908 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb5b272d-af47-4c09-ba67-9647e6cf277f-utilities\") pod \"redhat-marketplace-w5p28\" (UID: \"eb5b272d-af47-4c09-ba67-9647e6cf277f\") " pod="openshift-marketplace/redhat-marketplace-w5p28" Feb 17 18:19:09 crc kubenswrapper[4680]: I0217 18:19:09.874101 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb5b272d-af47-4c09-ba67-9647e6cf277f-catalog-content\") pod \"redhat-marketplace-w5p28\" (UID: \"eb5b272d-af47-4c09-ba67-9647e6cf277f\") " pod="openshift-marketplace/redhat-marketplace-w5p28" Feb 17 18:19:09 crc kubenswrapper[4680]: I0217 18:19:09.904145 4680 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-z7f8h\" (UniqueName: \"kubernetes.io/projected/eb5b272d-af47-4c09-ba67-9647e6cf277f-kube-api-access-z7f8h\") pod \"redhat-marketplace-w5p28\" (UID: \"eb5b272d-af47-4c09-ba67-9647e6cf277f\") " pod="openshift-marketplace/redhat-marketplace-w5p28" Feb 17 18:19:10 crc kubenswrapper[4680]: I0217 18:19:10.016505 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5p28" Feb 17 18:19:10 crc kubenswrapper[4680]: I0217 18:19:10.497444 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5p28"] Feb 17 18:19:10 crc kubenswrapper[4680]: I0217 18:19:10.666933 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5p28" event={"ID":"eb5b272d-af47-4c09-ba67-9647e6cf277f","Type":"ContainerStarted","Data":"49c482aaa6dc61e619c5bce038844896710309bc199b3ec67800a08a49b220f0"} Feb 17 18:19:11 crc kubenswrapper[4680]: I0217 18:19:11.677936 4680 generic.go:334] "Generic (PLEG): container finished" podID="eb5b272d-af47-4c09-ba67-9647e6cf277f" containerID="254b2ac95d69b3248fb9b1175ce1ba07b9d3de42f616601288de944af7308ac6" exitCode=0 Feb 17 18:19:11 crc kubenswrapper[4680]: I0217 18:19:11.678054 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5p28" event={"ID":"eb5b272d-af47-4c09-ba67-9647e6cf277f","Type":"ContainerDied","Data":"254b2ac95d69b3248fb9b1175ce1ba07b9d3de42f616601288de944af7308ac6"} Feb 17 18:19:12 crc kubenswrapper[4680]: I0217 18:19:12.687657 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5p28" event={"ID":"eb5b272d-af47-4c09-ba67-9647e6cf277f","Type":"ContainerStarted","Data":"44d01fa6aaccc639eb2816ecf1c94473965421c73465ff085640a3b501ba1521"} Feb 17 18:19:13 crc kubenswrapper[4680]: I0217 18:19:13.700679 4680 generic.go:334] "Generic (PLEG): container finished" podID="eb5b272d-af47-4c09-ba67-9647e6cf277f" containerID="44d01fa6aaccc639eb2816ecf1c94473965421c73465ff085640a3b501ba1521" exitCode=0 Feb 17 18:19:13 crc kubenswrapper[4680]: I0217 18:19:13.700743 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5p28" event={"ID":"eb5b272d-af47-4c09-ba67-9647e6cf277f","Type":"ContainerDied","Data":"44d01fa6aaccc639eb2816ecf1c94473965421c73465ff085640a3b501ba1521"} Feb 17 18:19:14 crc kubenswrapper[4680]: I0217 18:19:14.711156 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5p28" event={"ID":"eb5b272d-af47-4c09-ba67-9647e6cf277f","Type":"ContainerStarted","Data":"8f76d83a94168f45eb0b34e954ff1417f898ac19d6097c55105ddc2b5fd4364f"} Feb 17 18:19:14 crc kubenswrapper[4680]: I0217 18:19:14.734046 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w5p28" podStartSLOduration=3.293927327 podStartE2EDuration="5.73402781s" podCreationTimestamp="2026-02-17 18:19:09 +0000 UTC" firstStartedPulling="2026-02-17 18:19:11.680667225 +0000 UTC m=+2081.231565904" lastFinishedPulling="2026-02-17 18:19:14.120767708 +0000 UTC m=+2083.671666387" observedRunningTime="2026-02-17 18:19:14.727746583 +0000 UTC m=+2084.278645262" watchObservedRunningTime="2026-02-17 18:19:14.73402781 +0000 UTC m=+2084.284926489" Feb 17 18:19:20 crc kubenswrapper[4680]: I0217 18:19:20.017242 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-w5p28" Feb 17 18:19:20 crc kubenswrapper[4680]: I0217 18:19:20.018435 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w5p28" Feb 17 18:19:20 crc kubenswrapper[4680]: I0217 18:19:20.064472 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w5p28" Feb 17 18:19:20 crc kubenswrapper[4680]: I0217 18:19:20.803590 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w5p28" Feb 17 18:19:20 crc kubenswrapper[4680]: I0217 18:19:20.856077 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5p28"] Feb 17 18:19:22 crc kubenswrapper[4680]: I0217 18:19:22.772461 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w5p28" podUID="eb5b272d-af47-4c09-ba67-9647e6cf277f" containerName="registry-server" containerID="cri-o://8f76d83a94168f45eb0b34e954ff1417f898ac19d6097c55105ddc2b5fd4364f" gracePeriod=2 Feb 17 18:19:23 crc kubenswrapper[4680]: I0217 18:19:23.792288 4680 generic.go:334] "Generic (PLEG): container finished" podID="eb5b272d-af47-4c09-ba67-9647e6cf277f" containerID="8f76d83a94168f45eb0b34e954ff1417f898ac19d6097c55105ddc2b5fd4364f" exitCode=0 Feb 17 18:19:23 crc kubenswrapper[4680]: I0217 18:19:23.792688 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5p28" event={"ID":"eb5b272d-af47-4c09-ba67-9647e6cf277f","Type":"ContainerDied","Data":"8f76d83a94168f45eb0b34e954ff1417f898ac19d6097c55105ddc2b5fd4364f"} Feb 17 18:19:23 crc kubenswrapper[4680]: I0217 18:19:23.923250 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5p28" Feb 17 18:19:24 crc kubenswrapper[4680]: I0217 18:19:24.037532 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb5b272d-af47-4c09-ba67-9647e6cf277f-utilities\") pod \"eb5b272d-af47-4c09-ba67-9647e6cf277f\" (UID: \"eb5b272d-af47-4c09-ba67-9647e6cf277f\") " Feb 17 18:19:24 crc kubenswrapper[4680]: I0217 18:19:24.038151 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb5b272d-af47-4c09-ba67-9647e6cf277f-catalog-content\") pod \"eb5b272d-af47-4c09-ba67-9647e6cf277f\" (UID: \"eb5b272d-af47-4c09-ba67-9647e6cf277f\") " Feb 17 18:19:24 crc kubenswrapper[4680]: I0217 18:19:24.038297 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7f8h\" (UniqueName: \"kubernetes.io/projected/eb5b272d-af47-4c09-ba67-9647e6cf277f-kube-api-access-z7f8h\") pod \"eb5b272d-af47-4c09-ba67-9647e6cf277f\" (UID: \"eb5b272d-af47-4c09-ba67-9647e6cf277f\") " Feb 17 18:19:24 crc kubenswrapper[4680]: I0217 18:19:24.038561 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb5b272d-af47-4c09-ba67-9647e6cf277f-utilities" (OuterVolumeSpecName: "utilities") pod "eb5b272d-af47-4c09-ba67-9647e6cf277f" (UID: "eb5b272d-af47-4c09-ba67-9647e6cf277f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:19:24 crc kubenswrapper[4680]: I0217 18:19:24.039130 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb5b272d-af47-4c09-ba67-9647e6cf277f-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 18:19:24 crc kubenswrapper[4680]: I0217 18:19:24.049726 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb5b272d-af47-4c09-ba67-9647e6cf277f-kube-api-access-z7f8h" (OuterVolumeSpecName: "kube-api-access-z7f8h") pod "eb5b272d-af47-4c09-ba67-9647e6cf277f" (UID: "eb5b272d-af47-4c09-ba67-9647e6cf277f"). InnerVolumeSpecName "kube-api-access-z7f8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:19:24 crc kubenswrapper[4680]: I0217 18:19:24.069550 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb5b272d-af47-4c09-ba67-9647e6cf277f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb5b272d-af47-4c09-ba67-9647e6cf277f" (UID: "eb5b272d-af47-4c09-ba67-9647e6cf277f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:19:24 crc kubenswrapper[4680]: I0217 18:19:24.140852 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb5b272d-af47-4c09-ba67-9647e6cf277f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 18:19:24 crc kubenswrapper[4680]: I0217 18:19:24.140902 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7f8h\" (UniqueName: \"kubernetes.io/projected/eb5b272d-af47-4c09-ba67-9647e6cf277f-kube-api-access-z7f8h\") on node \"crc\" DevicePath \"\"" Feb 17 18:19:24 crc kubenswrapper[4680]: I0217 18:19:24.804429 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5p28" event={"ID":"eb5b272d-af47-4c09-ba67-9647e6cf277f","Type":"ContainerDied","Data":"49c482aaa6dc61e619c5bce038844896710309bc199b3ec67800a08a49b220f0"} Feb 17 18:19:24 crc kubenswrapper[4680]: I0217 18:19:24.804486 4680 scope.go:117] "RemoveContainer" containerID="8f76d83a94168f45eb0b34e954ff1417f898ac19d6097c55105ddc2b5fd4364f" Feb 17 18:19:24 crc kubenswrapper[4680]: I0217 18:19:24.804632 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5p28" Feb 17 18:19:24 crc kubenswrapper[4680]: I0217 18:19:24.829243 4680 scope.go:117] "RemoveContainer" containerID="44d01fa6aaccc639eb2816ecf1c94473965421c73465ff085640a3b501ba1521" Feb 17 18:19:24 crc kubenswrapper[4680]: I0217 18:19:24.842203 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5p28"] Feb 17 18:19:24 crc kubenswrapper[4680]: I0217 18:19:24.850973 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5p28"] Feb 17 18:19:24 crc kubenswrapper[4680]: I0217 18:19:24.865402 4680 scope.go:117] "RemoveContainer" containerID="254b2ac95d69b3248fb9b1175ce1ba07b9d3de42f616601288de944af7308ac6" Feb 17 18:19:25 crc kubenswrapper[4680]: I0217 18:19:25.117871 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb5b272d-af47-4c09-ba67-9647e6cf277f" path="/var/lib/kubelet/pods/eb5b272d-af47-4c09-ba67-9647e6cf277f/volumes" Feb 17 18:19:26 crc kubenswrapper[4680]: I0217 18:19:26.166258 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4bm4p"] Feb 17 18:19:26 crc kubenswrapper[4680]: E0217 18:19:26.166903 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb5b272d-af47-4c09-ba67-9647e6cf277f" containerName="registry-server" Feb 17 18:19:26 crc kubenswrapper[4680]: I0217 18:19:26.166919 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb5b272d-af47-4c09-ba67-9647e6cf277f" containerName="registry-server" Feb 17 18:19:26 crc kubenswrapper[4680]: E0217 18:19:26.166940 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb5b272d-af47-4c09-ba67-9647e6cf277f" containerName="extract-utilities" Feb 17 18:19:26 crc kubenswrapper[4680]: I0217 18:19:26.166947 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb5b272d-af47-4c09-ba67-9647e6cf277f" containerName="extract-utilities" Feb 17 18:19:26 crc kubenswrapper[4680]: E0217 18:19:26.166974 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb5b272d-af47-4c09-ba67-9647e6cf277f" containerName="extract-content" Feb 17 18:19:26 crc kubenswrapper[4680]: I0217 18:19:26.166983 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb5b272d-af47-4c09-ba67-9647e6cf277f" containerName="extract-content" Feb 17 18:19:26 crc kubenswrapper[4680]: I0217 18:19:26.167187 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb5b272d-af47-4c09-ba67-9647e6cf277f" containerName="registry-server" Feb 17 18:19:26 crc kubenswrapper[4680]: I0217 18:19:26.168754 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4bm4p" Feb 17 18:19:26 crc kubenswrapper[4680]: I0217 18:19:26.182215 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4bm4p"] Feb 17 18:19:26 crc kubenswrapper[4680]: I0217 18:19:26.279115 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f52gt\" (UniqueName: \"kubernetes.io/projected/7ad77fd2-6a8f-4a93-b060-bac1174b362d-kube-api-access-f52gt\") pod \"community-operators-4bm4p\" (UID: \"7ad77fd2-6a8f-4a93-b060-bac1174b362d\") " pod="openshift-marketplace/community-operators-4bm4p" Feb 17 18:19:26 crc kubenswrapper[4680]: I0217 18:19:26.279582 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ad77fd2-6a8f-4a93-b060-bac1174b362d-utilities\") pod \"community-operators-4bm4p\" (UID: \"7ad77fd2-6a8f-4a93-b060-bac1174b362d\") " pod="openshift-marketplace/community-operators-4bm4p" Feb 17 18:19:26 crc kubenswrapper[4680]: I0217 18:19:26.279724 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ad77fd2-6a8f-4a93-b060-bac1174b362d-catalog-content\") pod \"community-operators-4bm4p\" (UID: \"7ad77fd2-6a8f-4a93-b060-bac1174b362d\") " pod="openshift-marketplace/community-operators-4bm4p" Feb 17 18:19:26 crc kubenswrapper[4680]: I0217 18:19:26.381628 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ad77fd2-6a8f-4a93-b060-bac1174b362d-utilities\") pod \"community-operators-4bm4p\" (UID: \"7ad77fd2-6a8f-4a93-b060-bac1174b362d\") " pod="openshift-marketplace/community-operators-4bm4p" Feb 17 18:19:26 crc kubenswrapper[4680]: I0217 18:19:26.381732 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ad77fd2-6a8f-4a93-b060-bac1174b362d-catalog-content\") pod \"community-operators-4bm4p\" (UID: \"7ad77fd2-6a8f-4a93-b060-bac1174b362d\") " pod="openshift-marketplace/community-operators-4bm4p" Feb 17 18:19:26 crc kubenswrapper[4680]: I0217 18:19:26.381787 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f52gt\" (UniqueName: \"kubernetes.io/projected/7ad77fd2-6a8f-4a93-b060-bac1174b362d-kube-api-access-f52gt\") pod \"community-operators-4bm4p\" (UID: \"7ad77fd2-6a8f-4a93-b060-bac1174b362d\") " pod="openshift-marketplace/community-operators-4bm4p" Feb 17 18:19:26 crc kubenswrapper[4680]: I0217 18:19:26.382088 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ad77fd2-6a8f-4a93-b060-bac1174b362d-utilities\") pod \"community-operators-4bm4p\" (UID: \"7ad77fd2-6a8f-4a93-b060-bac1174b362d\") " pod="openshift-marketplace/community-operators-4bm4p" Feb 17 18:19:26 crc kubenswrapper[4680]: I0217 18:19:26.382534 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ad77fd2-6a8f-4a93-b060-bac1174b362d-catalog-content\") pod \"community-operators-4bm4p\" (UID: \"7ad77fd2-6a8f-4a93-b060-bac1174b362d\") " pod="openshift-marketplace/community-operators-4bm4p" Feb 17 18:19:26 crc kubenswrapper[4680]: I0217 18:19:26.401765 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-f52gt\" (UniqueName: \"kubernetes.io/projected/7ad77fd2-6a8f-4a93-b060-bac1174b362d-kube-api-access-f52gt\") pod \"community-operators-4bm4p\" (UID: \"7ad77fd2-6a8f-4a93-b060-bac1174b362d\") " pod="openshift-marketplace/community-operators-4bm4p" Feb 17 18:19:26 crc kubenswrapper[4680]: I0217 18:19:26.491009 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4bm4p" Feb 17 18:19:27 crc kubenswrapper[4680]: I0217 18:19:27.074861 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4bm4p"] Feb 17 18:19:27 crc kubenswrapper[4680]: W0217 18:19:27.078708 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ad77fd2_6a8f_4a93_b060_bac1174b362d.slice/crio-6f034fb61f05926c340f17cea86095284ef8c244a2565254488c3e07013e902a WatchSource:0}: Error finding container 6f034fb61f05926c340f17cea86095284ef8c244a2565254488c3e07013e902a: Status 404 returned error can't find the container with id 6f034fb61f05926c340f17cea86095284ef8c244a2565254488c3e07013e902a Feb 17 18:19:27 crc kubenswrapper[4680]: I0217 18:19:27.890423 4680 generic.go:334] "Generic (PLEG): container finished" podID="7ad77fd2-6a8f-4a93-b060-bac1174b362d" containerID="a0dc91b2a38a1f1a1685a3e3d5625c00af6d6c4a51e81ead13cba48bdab82113" exitCode=0 Feb 17 18:19:27 crc kubenswrapper[4680]: I0217 18:19:27.890530 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4bm4p" event={"ID":"7ad77fd2-6a8f-4a93-b060-bac1174b362d","Type":"ContainerDied","Data":"a0dc91b2a38a1f1a1685a3e3d5625c00af6d6c4a51e81ead13cba48bdab82113"} Feb 17 18:19:27 crc kubenswrapper[4680]: I0217 18:19:27.890722 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4bm4p" event={"ID":"7ad77fd2-6a8f-4a93-b060-bac1174b362d","Type":"ContainerStarted","Data":"6f034fb61f05926c340f17cea86095284ef8c244a2565254488c3e07013e902a"} Feb 17 18:19:28 crc kubenswrapper[4680]: I0217 18:19:28.900168 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4bm4p" event={"ID":"7ad77fd2-6a8f-4a93-b060-bac1174b362d","Type":"ContainerStarted","Data":"3ebdaddccdad5fd2beac3e47c009b800cebc456c12a149ba330f51b97798413e"} Feb 17 18:19:30 crc kubenswrapper[4680]: I0217 18:19:30.918909 4680 generic.go:334] "Generic (PLEG): container finished" podID="7ad77fd2-6a8f-4a93-b060-bac1174b362d" containerID="3ebdaddccdad5fd2beac3e47c009b800cebc456c12a149ba330f51b97798413e" exitCode=0 Feb 17 18:19:30 crc kubenswrapper[4680]: I0217 18:19:30.918996 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4bm4p" event={"ID":"7ad77fd2-6a8f-4a93-b060-bac1174b362d","Type":"ContainerDied","Data":"3ebdaddccdad5fd2beac3e47c009b800cebc456c12a149ba330f51b97798413e"} Feb 17 18:19:31 crc kubenswrapper[4680]: I0217 18:19:31.930366 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4bm4p" event={"ID":"7ad77fd2-6a8f-4a93-b060-bac1174b362d","Type":"ContainerStarted","Data":"a835e9f19cf42a7354c8538d0b9deb66d5fa2b7569aa9634bf2cb1c4b6552460"} Feb 17 18:19:36 crc kubenswrapper[4680]: I0217 18:19:36.492056 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4bm4p" Feb 17 18:19:36 crc 
kubenswrapper[4680]: I0217 18:19:36.492748 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4bm4p" Feb 17 18:19:36 crc kubenswrapper[4680]: I0217 18:19:36.539287 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4bm4p" Feb 17 18:19:36 crc kubenswrapper[4680]: I0217 18:19:36.562829 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4bm4p" podStartSLOduration=7.15681398 podStartE2EDuration="10.56279997s" podCreationTimestamp="2026-02-17 18:19:26 +0000 UTC" firstStartedPulling="2026-02-17 18:19:27.891925985 +0000 UTC m=+2097.442824664" lastFinishedPulling="2026-02-17 18:19:31.297911975 +0000 UTC m=+2100.848810654" observedRunningTime="2026-02-17 18:19:31.953501374 +0000 UTC m=+2101.504400073" watchObservedRunningTime="2026-02-17 18:19:36.56279997 +0000 UTC m=+2106.113698649" Feb 17 18:19:37 crc kubenswrapper[4680]: I0217 18:19:37.018461 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4bm4p" Feb 17 18:19:37 crc kubenswrapper[4680]: I0217 18:19:37.078999 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4bm4p"] Feb 17 18:19:38 crc kubenswrapper[4680]: I0217 18:19:38.984730 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4bm4p" podUID="7ad77fd2-6a8f-4a93-b060-bac1174b362d" containerName="registry-server" containerID="cri-o://a835e9f19cf42a7354c8538d0b9deb66d5fa2b7569aa9634bf2cb1c4b6552460" gracePeriod=2 Feb 17 18:19:39 crc kubenswrapper[4680]: I0217 18:19:39.632681 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4bm4p" Feb 17 18:19:39 crc kubenswrapper[4680]: I0217 18:19:39.791297 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f52gt\" (UniqueName: \"kubernetes.io/projected/7ad77fd2-6a8f-4a93-b060-bac1174b362d-kube-api-access-f52gt\") pod \"7ad77fd2-6a8f-4a93-b060-bac1174b362d\" (UID: \"7ad77fd2-6a8f-4a93-b060-bac1174b362d\") " Feb 17 18:19:39 crc kubenswrapper[4680]: I0217 18:19:39.791395 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ad77fd2-6a8f-4a93-b060-bac1174b362d-utilities\") pod \"7ad77fd2-6a8f-4a93-b060-bac1174b362d\" (UID: \"7ad77fd2-6a8f-4a93-b060-bac1174b362d\") " Feb 17 18:19:39 crc kubenswrapper[4680]: I0217 18:19:39.791627 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ad77fd2-6a8f-4a93-b060-bac1174b362d-catalog-content\") pod \"7ad77fd2-6a8f-4a93-b060-bac1174b362d\" (UID: \"7ad77fd2-6a8f-4a93-b060-bac1174b362d\") " Feb 17 18:19:39 crc kubenswrapper[4680]: I0217 18:19:39.792755 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ad77fd2-6a8f-4a93-b060-bac1174b362d-utilities" (OuterVolumeSpecName: "utilities") pod "7ad77fd2-6a8f-4a93-b060-bac1174b362d" (UID: "7ad77fd2-6a8f-4a93-b060-bac1174b362d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:19:39 crc kubenswrapper[4680]: I0217 18:19:39.809535 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ad77fd2-6a8f-4a93-b060-bac1174b362d-kube-api-access-f52gt" (OuterVolumeSpecName: "kube-api-access-f52gt") pod "7ad77fd2-6a8f-4a93-b060-bac1174b362d" (UID: "7ad77fd2-6a8f-4a93-b060-bac1174b362d"). InnerVolumeSpecName "kube-api-access-f52gt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:19:39 crc kubenswrapper[4680]: I0217 18:19:39.860444 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ad77fd2-6a8f-4a93-b060-bac1174b362d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ad77fd2-6a8f-4a93-b060-bac1174b362d" (UID: "7ad77fd2-6a8f-4a93-b060-bac1174b362d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:19:39 crc kubenswrapper[4680]: I0217 18:19:39.895528 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ad77fd2-6a8f-4a93-b060-bac1174b362d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 18:19:39 crc kubenswrapper[4680]: I0217 18:19:39.895559 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f52gt\" (UniqueName: \"kubernetes.io/projected/7ad77fd2-6a8f-4a93-b060-bac1174b362d-kube-api-access-f52gt\") on node \"crc\" DevicePath \"\"" Feb 17 18:19:39 crc kubenswrapper[4680]: I0217 18:19:39.895570 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ad77fd2-6a8f-4a93-b060-bac1174b362d-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 18:19:39 crc kubenswrapper[4680]: I0217 18:19:39.992955 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4bm4p" Feb 17 18:19:39 crc kubenswrapper[4680]: I0217 18:19:39.993014 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4bm4p" event={"ID":"7ad77fd2-6a8f-4a93-b060-bac1174b362d","Type":"ContainerDied","Data":"a835e9f19cf42a7354c8538d0b9deb66d5fa2b7569aa9634bf2cb1c4b6552460"} Feb 17 18:19:39 crc kubenswrapper[4680]: I0217 18:19:39.993336 4680 scope.go:117] "RemoveContainer" containerID="a835e9f19cf42a7354c8538d0b9deb66d5fa2b7569aa9634bf2cb1c4b6552460" Feb 17 18:19:39 crc kubenswrapper[4680]: I0217 18:19:39.992880 4680 generic.go:334] "Generic (PLEG): container finished" podID="7ad77fd2-6a8f-4a93-b060-bac1174b362d" containerID="a835e9f19cf42a7354c8538d0b9deb66d5fa2b7569aa9634bf2cb1c4b6552460" exitCode=0 Feb 17 18:19:39 crc kubenswrapper[4680]: I0217 18:19:39.993568 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4bm4p" event={"ID":"7ad77fd2-6a8f-4a93-b060-bac1174b362d","Type":"ContainerDied","Data":"6f034fb61f05926c340f17cea86095284ef8c244a2565254488c3e07013e902a"} Feb 17 18:19:40 crc kubenswrapper[4680]: I0217 18:19:40.016090 4680 scope.go:117] "RemoveContainer" containerID="3ebdaddccdad5fd2beac3e47c009b800cebc456c12a149ba330f51b97798413e" Feb 17 18:19:40 crc kubenswrapper[4680]: I0217 18:19:40.033338 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4bm4p"] Feb 17 18:19:40 crc kubenswrapper[4680]: I0217 18:19:40.044766 4680 scope.go:117] "RemoveContainer" containerID="a0dc91b2a38a1f1a1685a3e3d5625c00af6d6c4a51e81ead13cba48bdab82113" Feb 17 18:19:40 crc kubenswrapper[4680]: I0217 18:19:40.053875 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4bm4p"] Feb 17 18:19:40 crc kubenswrapper[4680]: I0217 18:19:40.096631 4680 scope.go:117] "RemoveContainer" containerID="a835e9f19cf42a7354c8538d0b9deb66d5fa2b7569aa9634bf2cb1c4b6552460" Feb 17 18:19:40 crc kubenswrapper[4680]: E0217 18:19:40.097043 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a835e9f19cf42a7354c8538d0b9deb66d5fa2b7569aa9634bf2cb1c4b6552460\": container with ID starting with a835e9f19cf42a7354c8538d0b9deb66d5fa2b7569aa9634bf2cb1c4b6552460 not found: ID does not exist" containerID="a835e9f19cf42a7354c8538d0b9deb66d5fa2b7569aa9634bf2cb1c4b6552460" Feb 17 18:19:40 crc kubenswrapper[4680]: I0217 18:19:40.097069 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a835e9f19cf42a7354c8538d0b9deb66d5fa2b7569aa9634bf2cb1c4b6552460"} err="failed to get container status \"a835e9f19cf42a7354c8538d0b9deb66d5fa2b7569aa9634bf2cb1c4b6552460\": rpc error: code = NotFound desc = could not find container \"a835e9f19cf42a7354c8538d0b9deb66d5fa2b7569aa9634bf2cb1c4b6552460\": container with ID starting with a835e9f19cf42a7354c8538d0b9deb66d5fa2b7569aa9634bf2cb1c4b6552460 not found: ID does not exist" Feb 17 18:19:40 crc kubenswrapper[4680]: I0217 18:19:40.097094 4680 scope.go:117] "RemoveContainer" containerID="3ebdaddccdad5fd2beac3e47c009b800cebc456c12a149ba330f51b97798413e" Feb 17 18:19:40 crc kubenswrapper[4680]: E0217 18:19:40.098387 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ebdaddccdad5fd2beac3e47c009b800cebc456c12a149ba330f51b97798413e\": container with ID 
starting with 3ebdaddccdad5fd2beac3e47c009b800cebc456c12a149ba330f51b97798413e not found: ID does not exist" containerID="3ebdaddccdad5fd2beac3e47c009b800cebc456c12a149ba330f51b97798413e" Feb 17 18:19:40 crc kubenswrapper[4680]: I0217 18:19:40.098423 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ebdaddccdad5fd2beac3e47c009b800cebc456c12a149ba330f51b97798413e"} err="failed to get container status \"3ebdaddccdad5fd2beac3e47c009b800cebc456c12a149ba330f51b97798413e\": rpc error: code = NotFound desc = could not find container \"3ebdaddccdad5fd2beac3e47c009b800cebc456c12a149ba330f51b97798413e\": container with ID starting with 3ebdaddccdad5fd2beac3e47c009b800cebc456c12a149ba330f51b97798413e not found: ID does not exist" Feb 17 18:19:40 crc kubenswrapper[4680]: I0217 18:19:40.098468 4680 scope.go:117] "RemoveContainer" containerID="a0dc91b2a38a1f1a1685a3e3d5625c00af6d6c4a51e81ead13cba48bdab82113" Feb 17 18:19:40 crc kubenswrapper[4680]: E0217 18:19:40.098949 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0dc91b2a38a1f1a1685a3e3d5625c00af6d6c4a51e81ead13cba48bdab82113\": container with ID starting with a0dc91b2a38a1f1a1685a3e3d5625c00af6d6c4a51e81ead13cba48bdab82113 not found: ID does not exist" containerID="a0dc91b2a38a1f1a1685a3e3d5625c00af6d6c4a51e81ead13cba48bdab82113" Feb 17 18:19:40 crc kubenswrapper[4680]: I0217 18:19:40.098992 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0dc91b2a38a1f1a1685a3e3d5625c00af6d6c4a51e81ead13cba48bdab82113"} err="failed to get container status \"a0dc91b2a38a1f1a1685a3e3d5625c00af6d6c4a51e81ead13cba48bdab82113\": rpc error: code = NotFound desc = could not find container \"a0dc91b2a38a1f1a1685a3e3d5625c00af6d6c4a51e81ead13cba48bdab82113\": container with ID starting with a0dc91b2a38a1f1a1685a3e3d5625c00af6d6c4a51e81ead13cba48bdab82113 not found: ID does not exist" Feb 17 18:19:41 crc kubenswrapper[4680]: I0217 18:19:41.118704 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ad77fd2-6a8f-4a93-b060-bac1174b362d" path="/var/lib/kubelet/pods/7ad77fd2-6a8f-4a93-b060-bac1174b362d/volumes" Feb 17 18:19:52 crc kubenswrapper[4680]: I0217 18:19:52.089602 4680 generic.go:334] "Generic (PLEG): container finished" podID="6966330b-ec2c-4a0d-a3b2-50c582e0eb69" containerID="946b510859e74056a38935e4b8fb254565185e3f0f81f847a38a841425d7fb6f" exitCode=0 Feb 17 18:19:52 crc kubenswrapper[4680]: I0217 18:19:52.089691 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z" event={"ID":"6966330b-ec2c-4a0d-a3b2-50c582e0eb69","Type":"ContainerDied","Data":"946b510859e74056a38935e4b8fb254565185e3f0f81f847a38a841425d7fb6f"} Feb 17 18:19:53 crc kubenswrapper[4680]: I0217 18:19:53.471559 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z" Feb 17 18:19:53 crc kubenswrapper[4680]: I0217 18:19:53.583795 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-inventory\") pod \"6966330b-ec2c-4a0d-a3b2-50c582e0eb69\" (UID: \"6966330b-ec2c-4a0d-a3b2-50c582e0eb69\") " Feb 17 18:19:53 crc kubenswrapper[4680]: I0217 18:19:53.583874 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x44nx\" (UniqueName: \"kubernetes.io/projected/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-kube-api-access-x44nx\") pod \"6966330b-ec2c-4a0d-a3b2-50c582e0eb69\" (UID: \"6966330b-ec2c-4a0d-a3b2-50c582e0eb69\") " Feb 17 18:19:53 crc kubenswrapper[4680]: I0217 18:19:53.583967 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-ssh-key-openstack-edpm-ipam\") pod \"6966330b-ec2c-4a0d-a3b2-50c582e0eb69\" (UID: \"6966330b-ec2c-4a0d-a3b2-50c582e0eb69\") " Feb 17 18:19:53 crc kubenswrapper[4680]: I0217 18:19:53.584074 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-nova-metadata-neutron-config-0\") pod \"6966330b-ec2c-4a0d-a3b2-50c582e0eb69\" (UID: \"6966330b-ec2c-4a0d-a3b2-50c582e0eb69\") " Feb 17 18:19:53 crc kubenswrapper[4680]: I0217 18:19:53.584179 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-neutron-metadata-combined-ca-bundle\") pod \"6966330b-ec2c-4a0d-a3b2-50c582e0eb69\" (UID: \"6966330b-ec2c-4a0d-a3b2-50c582e0eb69\") " Feb 17 18:19:53 crc kubenswrapper[4680]: I0217 18:19:53.584225 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-neutron-ovn-metadata-agent-neutron-config-0\") pod \"6966330b-ec2c-4a0d-a3b2-50c582e0eb69\" (UID: \"6966330b-ec2c-4a0d-a3b2-50c582e0eb69\") " Feb 17 18:19:53 crc kubenswrapper[4680]: I0217 18:19:53.590646 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "6966330b-ec2c-4a0d-a3b2-50c582e0eb69" (UID: "6966330b-ec2c-4a0d-a3b2-50c582e0eb69"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:19:53 crc kubenswrapper[4680]: I0217 18:19:53.590830 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-kube-api-access-x44nx" (OuterVolumeSpecName: "kube-api-access-x44nx") pod "6966330b-ec2c-4a0d-a3b2-50c582e0eb69" (UID: "6966330b-ec2c-4a0d-a3b2-50c582e0eb69"). InnerVolumeSpecName "kube-api-access-x44nx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:19:53 crc kubenswrapper[4680]: I0217 18:19:53.613874 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "6966330b-ec2c-4a0d-a3b2-50c582e0eb69" (UID: "6966330b-ec2c-4a0d-a3b2-50c582e0eb69"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:19:53 crc kubenswrapper[4680]: I0217 18:19:53.614409 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6966330b-ec2c-4a0d-a3b2-50c582e0eb69" (UID: "6966330b-ec2c-4a0d-a3b2-50c582e0eb69"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:19:53 crc kubenswrapper[4680]: I0217 18:19:53.614967 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-inventory" (OuterVolumeSpecName: "inventory") pod "6966330b-ec2c-4a0d-a3b2-50c582e0eb69" (UID: "6966330b-ec2c-4a0d-a3b2-50c582e0eb69"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:19:53 crc kubenswrapper[4680]: I0217 18:19:53.616403 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "6966330b-ec2c-4a0d-a3b2-50c582e0eb69" (UID: "6966330b-ec2c-4a0d-a3b2-50c582e0eb69"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:19:53 crc kubenswrapper[4680]: I0217 18:19:53.687012 4680 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 17 18:19:53 crc kubenswrapper[4680]: I0217 18:19:53.687216 4680 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:19:53 crc kubenswrapper[4680]: I0217 18:19:53.687295 4680 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 17 18:19:53 crc kubenswrapper[4680]: I0217 18:19:53.687380 4680 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 18:19:53 crc kubenswrapper[4680]: I0217 18:19:53.687505 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x44nx\" (UniqueName: \"kubernetes.io/projected/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-kube-api-access-x44nx\") on node \"crc\" DevicePath \"\"" Feb 17 18:19:53 crc kubenswrapper[4680]: I0217 18:19:53.687565 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6966330b-ec2c-4a0d-a3b2-50c582e0eb69-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.105611 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z" event={"ID":"6966330b-ec2c-4a0d-a3b2-50c582e0eb69","Type":"ContainerDied","Data":"2f3f19be3fe45a7f66034d0f1d3b1418c35018ed9114901928bbf365a82f020e"} Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.105654 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f3f19be3fe45a7f66034d0f1d3b1418c35018ed9114901928bbf365a82f020e" Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.105942 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z" Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.205761 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-65z5s"] Feb 17 18:19:54 crc kubenswrapper[4680]: E0217 18:19:54.206162 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6966330b-ec2c-4a0d-a3b2-50c582e0eb69" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.206185 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="6966330b-ec2c-4a0d-a3b2-50c582e0eb69" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 17 18:19:54 crc kubenswrapper[4680]: E0217 18:19:54.206212 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ad77fd2-6a8f-4a93-b060-bac1174b362d" containerName="extract-utilities" Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.206223 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ad77fd2-6a8f-4a93-b060-bac1174b362d" containerName="extract-utilities" Feb 17 18:19:54 crc kubenswrapper[4680]: E0217 18:19:54.206235 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ad77fd2-6a8f-4a93-b060-bac1174b362d" containerName="registry-server" Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.206243 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ad77fd2-6a8f-4a93-b060-bac1174b362d" containerName="registry-server" Feb 17 18:19:54 crc kubenswrapper[4680]: E0217 18:19:54.206294 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ad77fd2-6a8f-4a93-b060-bac1174b362d" containerName="extract-content" Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.206304 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ad77fd2-6a8f-4a93-b060-bac1174b362d" containerName="extract-content" Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.206504 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ad77fd2-6a8f-4a93-b060-bac1174b362d" containerName="registry-server" Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.206532 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="6966330b-ec2c-4a0d-a3b2-50c582e0eb69" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.207204 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-65z5s" Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.210906 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.214745 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mhdwq" Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.214986 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.215109 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.215151 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.230409 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-65z5s"] Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.401599 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5nxf\" (UniqueName: \"kubernetes.io/projected/d1713773-5461-439e-8d37-5c1f2b54db0a-kube-api-access-x5nxf\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-65z5s\" (UID: \"d1713773-5461-439e-8d37-5c1f2b54db0a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-65z5s" Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.401674 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1713773-5461-439e-8d37-5c1f2b54db0a-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-65z5s\" (UID: \"d1713773-5461-439e-8d37-5c1f2b54db0a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-65z5s" Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.402574 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d1713773-5461-439e-8d37-5c1f2b54db0a-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-65z5s\" (UID: \"d1713773-5461-439e-8d37-5c1f2b54db0a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-65z5s" Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.402698 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1713773-5461-439e-8d37-5c1f2b54db0a-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-65z5s\" (UID: \"d1713773-5461-439e-8d37-5c1f2b54db0a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-65z5s" Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.402801 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d1713773-5461-439e-8d37-5c1f2b54db0a-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-65z5s\" (UID: \"d1713773-5461-439e-8d37-5c1f2b54db0a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-65z5s" Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.504701 4680 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d1713773-5461-439e-8d37-5c1f2b54db0a-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-65z5s\" (UID: \"d1713773-5461-439e-8d37-5c1f2b54db0a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-65z5s" Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.505110 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1713773-5461-439e-8d37-5c1f2b54db0a-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-65z5s\" (UID: \"d1713773-5461-439e-8d37-5c1f2b54db0a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-65z5s" Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.505614 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d1713773-5461-439e-8d37-5c1f2b54db0a-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-65z5s\" (UID: \"d1713773-5461-439e-8d37-5c1f2b54db0a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-65z5s" Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.505670 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5nxf\" (UniqueName: \"kubernetes.io/projected/d1713773-5461-439e-8d37-5c1f2b54db0a-kube-api-access-x5nxf\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-65z5s\" (UID: \"d1713773-5461-439e-8d37-5c1f2b54db0a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-65z5s" Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.505762 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1713773-5461-439e-8d37-5c1f2b54db0a-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-65z5s\" (UID: \"d1713773-5461-439e-8d37-5c1f2b54db0a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-65z5s" Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.511059 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1713773-5461-439e-8d37-5c1f2b54db0a-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-65z5s\" (UID: \"d1713773-5461-439e-8d37-5c1f2b54db0a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-65z5s" Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.511729 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d1713773-5461-439e-8d37-5c1f2b54db0a-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-65z5s\" (UID: \"d1713773-5461-439e-8d37-5c1f2b54db0a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-65z5s" Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.512348 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1713773-5461-439e-8d37-5c1f2b54db0a-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-65z5s\" (UID: \"d1713773-5461-439e-8d37-5c1f2b54db0a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-65z5s" Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.523861 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d1713773-5461-439e-8d37-5c1f2b54db0a-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-65z5s\" (UID: \"d1713773-5461-439e-8d37-5c1f2b54db0a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-65z5s" Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.524546 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5nxf\" (UniqueName: \"kubernetes.io/projected/d1713773-5461-439e-8d37-5c1f2b54db0a-kube-api-access-x5nxf\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-65z5s\" (UID: \"d1713773-5461-439e-8d37-5c1f2b54db0a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-65z5s" Feb 17 18:19:54 crc kubenswrapper[4680]: I0217 18:19:54.526932 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-65z5s" Feb 17 18:19:55 crc kubenswrapper[4680]: I0217 18:19:55.056054 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-65z5s"] Feb 17 18:19:55 crc kubenswrapper[4680]: I0217 18:19:55.133340 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-65z5s" event={"ID":"d1713773-5461-439e-8d37-5c1f2b54db0a","Type":"ContainerStarted","Data":"6f3953459cad3b08c45f1db918d00c3e8a7663a54a4ac9df3d16c2cdaaa09423"} Feb 17 18:19:56 crc kubenswrapper[4680]: I0217 18:19:56.142091 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-65z5s" event={"ID":"d1713773-5461-439e-8d37-5c1f2b54db0a","Type":"ContainerStarted","Data":"b4388965afc8478c1d2059167a99a89be760ec5a36de37a6b470de8120070ad6"} Feb 17 18:19:56 crc kubenswrapper[4680]: I0217 18:19:56.161177 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-65z5s" podStartSLOduration=1.7442276749999999 podStartE2EDuration="2.161161467s" podCreationTimestamp="2026-02-17 18:19:54 +0000 UTC" firstStartedPulling="2026-02-17 18:19:55.059943645 +0000 UTC m=+2124.610842324" lastFinishedPulling="2026-02-17 18:19:55.476877437 +0000 UTC m=+2125.027776116" observedRunningTime="2026-02-17 18:19:56.159448424 +0000 UTC m=+2125.710347103" watchObservedRunningTime="2026-02-17 18:19:56.161161467 +0000 UTC m=+2125.712060146" Feb 17 18:20:05 crc kubenswrapper[4680]: I0217 18:20:05.589061 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:20:05 crc kubenswrapper[4680]: I0217 18:20:05.589588 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:20:35 crc kubenswrapper[4680]: I0217 18:20:35.588890 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 
18:20:35 crc kubenswrapper[4680]: I0217 18:20:35.589572 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:21:05 crc kubenswrapper[4680]: I0217 18:21:05.589090 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:21:05 crc kubenswrapper[4680]: I0217 18:21:05.589589 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:21:05 crc kubenswrapper[4680]: I0217 18:21:05.589632 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" Feb 17 18:21:05 crc kubenswrapper[4680]: I0217 18:21:05.590176 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"05e9321314b8bcfdfcee519e4b39d83405420f17828fd66310ce237cc3c9ab78"} pod="openshift-machine-config-operator/machine-config-daemon-rfw56" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 18:21:05 crc kubenswrapper[4680]: I0217 18:21:05.590222 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" containerID="cri-o://05e9321314b8bcfdfcee519e4b39d83405420f17828fd66310ce237cc3c9ab78" gracePeriod=600 Feb 17 18:21:05 crc kubenswrapper[4680]: E0217 18:21:05.711403 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:21:05 crc kubenswrapper[4680]: I0217 18:21:05.724458 4680 generic.go:334] "Generic (PLEG): container finished" podID="8dd08058-016e-4607-8be6-40463674c2fe" containerID="05e9321314b8bcfdfcee519e4b39d83405420f17828fd66310ce237cc3c9ab78" exitCode=0 Feb 17 18:21:05 crc kubenswrapper[4680]: I0217 18:21:05.724527 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerDied","Data":"05e9321314b8bcfdfcee519e4b39d83405420f17828fd66310ce237cc3c9ab78"} Feb 17 18:21:05 crc kubenswrapper[4680]: I0217 18:21:05.724632 4680 scope.go:117] "RemoveContainer" containerID="cd71dcda81999845ebec67c7f0fbf1d96faca4f692411c4e3318e7b0e0daed8f" Feb 17 18:21:05 crc kubenswrapper[4680]: I0217 18:21:05.725836 4680 scope.go:117] "RemoveContainer" 
containerID="05e9321314b8bcfdfcee519e4b39d83405420f17828fd66310ce237cc3c9ab78" Feb 17 18:21:05 crc kubenswrapper[4680]: E0217 18:21:05.726202 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:21:19 crc kubenswrapper[4680]: I0217 18:21:19.105400 4680 scope.go:117] "RemoveContainer" containerID="05e9321314b8bcfdfcee519e4b39d83405420f17828fd66310ce237cc3c9ab78" Feb 17 18:21:19 crc kubenswrapper[4680]: E0217 18:21:19.106334 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:21:31 crc kubenswrapper[4680]: I0217 18:21:31.403180 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5wfnw"] Feb 17 18:21:31 crc kubenswrapper[4680]: I0217 18:21:31.405821 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5wfnw" Feb 17 18:21:31 crc kubenswrapper[4680]: I0217 18:21:31.426019 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5wfnw"] Feb 17 18:21:31 crc kubenswrapper[4680]: I0217 18:21:31.440220 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b173045c-2ed1-4302-9855-81d80eb8529e-utilities\") pod \"certified-operators-5wfnw\" (UID: \"b173045c-2ed1-4302-9855-81d80eb8529e\") " pod="openshift-marketplace/certified-operators-5wfnw" Feb 17 18:21:31 crc kubenswrapper[4680]: I0217 18:21:31.440274 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpqc2\" (UniqueName: \"kubernetes.io/projected/b173045c-2ed1-4302-9855-81d80eb8529e-kube-api-access-lpqc2\") pod \"certified-operators-5wfnw\" (UID: \"b173045c-2ed1-4302-9855-81d80eb8529e\") " pod="openshift-marketplace/certified-operators-5wfnw" Feb 17 18:21:31 crc kubenswrapper[4680]: I0217 18:21:31.440442 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b173045c-2ed1-4302-9855-81d80eb8529e-catalog-content\") pod \"certified-operators-5wfnw\" (UID: \"b173045c-2ed1-4302-9855-81d80eb8529e\") " pod="openshift-marketplace/certified-operators-5wfnw" Feb 17 18:21:31 crc kubenswrapper[4680]: I0217 18:21:31.542520 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b173045c-2ed1-4302-9855-81d80eb8529e-utilities\") pod \"certified-operators-5wfnw\" (UID: \"b173045c-2ed1-4302-9855-81d80eb8529e\") " pod="openshift-marketplace/certified-operators-5wfnw" Feb 17 18:21:31 crc kubenswrapper[4680]: I0217 18:21:31.542585 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-lpqc2\" (UniqueName: \"kubernetes.io/projected/b173045c-2ed1-4302-9855-81d80eb8529e-kube-api-access-lpqc2\") pod \"certified-operators-5wfnw\" (UID: \"b173045c-2ed1-4302-9855-81d80eb8529e\") " pod="openshift-marketplace/certified-operators-5wfnw" Feb 17 18:21:31 crc kubenswrapper[4680]: I0217 18:21:31.542695 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b173045c-2ed1-4302-9855-81d80eb8529e-catalog-content\") pod \"certified-operators-5wfnw\" (UID: \"b173045c-2ed1-4302-9855-81d80eb8529e\") " pod="openshift-marketplace/certified-operators-5wfnw" Feb 17 18:21:31 crc kubenswrapper[4680]: I0217 18:21:31.543011 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b173045c-2ed1-4302-9855-81d80eb8529e-utilities\") pod \"certified-operators-5wfnw\" (UID: \"b173045c-2ed1-4302-9855-81d80eb8529e\") " pod="openshift-marketplace/certified-operators-5wfnw" Feb 17 18:21:31 crc kubenswrapper[4680]: I0217 18:21:31.543053 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b173045c-2ed1-4302-9855-81d80eb8529e-catalog-content\") pod \"certified-operators-5wfnw\" (UID: \"b173045c-2ed1-4302-9855-81d80eb8529e\") " pod="openshift-marketplace/certified-operators-5wfnw" Feb 17 18:21:31 crc kubenswrapper[4680]: I0217 18:21:31.563264 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpqc2\" (UniqueName: \"kubernetes.io/projected/b173045c-2ed1-4302-9855-81d80eb8529e-kube-api-access-lpqc2\") pod \"certified-operators-5wfnw\" (UID: \"b173045c-2ed1-4302-9855-81d80eb8529e\") " pod="openshift-marketplace/certified-operators-5wfnw" Feb 17 18:21:31 crc kubenswrapper[4680]: I0217 18:21:31.760684 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5wfnw" Feb 17 18:21:32 crc kubenswrapper[4680]: I0217 18:21:32.280415 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5wfnw"] Feb 17 18:21:32 crc kubenswrapper[4680]: I0217 18:21:32.953678 4680 generic.go:334] "Generic (PLEG): container finished" podID="b173045c-2ed1-4302-9855-81d80eb8529e" containerID="1969749e7d252c16c28f269669f8d49c2caac9c836f441d10e77bafe8f7f2b29" exitCode=0 Feb 17 18:21:32 crc kubenswrapper[4680]: I0217 18:21:32.953773 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wfnw" event={"ID":"b173045c-2ed1-4302-9855-81d80eb8529e","Type":"ContainerDied","Data":"1969749e7d252c16c28f269669f8d49c2caac9c836f441d10e77bafe8f7f2b29"} Feb 17 18:21:32 crc kubenswrapper[4680]: I0217 18:21:32.953995 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wfnw" event={"ID":"b173045c-2ed1-4302-9855-81d80eb8529e","Type":"ContainerStarted","Data":"d662169f7797be6be76fd3de8480d9b9f30ef2d0fa5efa5e366dc7dbe70c0121"} Feb 17 18:21:33 crc kubenswrapper[4680]: I0217 18:21:33.105934 4680 scope.go:117] "RemoveContainer" containerID="05e9321314b8bcfdfcee519e4b39d83405420f17828fd66310ce237cc3c9ab78" Feb 17 18:21:33 crc kubenswrapper[4680]: E0217 18:21:33.106494 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:21:33 crc kubenswrapper[4680]: I0217 18:21:33.962978 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wfnw" event={"ID":"b173045c-2ed1-4302-9855-81d80eb8529e","Type":"ContainerStarted","Data":"f7e1c91c511ade6038cff8c9003bec5bd18683d462577f0256103a1045d76739"} Feb 17 18:21:35 crc kubenswrapper[4680]: I0217 18:21:35.980608 4680 generic.go:334] "Generic (PLEG): container finished" podID="b173045c-2ed1-4302-9855-81d80eb8529e" containerID="f7e1c91c511ade6038cff8c9003bec5bd18683d462577f0256103a1045d76739" exitCode=0 Feb 17 18:21:35 crc kubenswrapper[4680]: I0217 18:21:35.980669 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wfnw" event={"ID":"b173045c-2ed1-4302-9855-81d80eb8529e","Type":"ContainerDied","Data":"f7e1c91c511ade6038cff8c9003bec5bd18683d462577f0256103a1045d76739"} Feb 17 18:21:36 crc kubenswrapper[4680]: I0217 18:21:36.989692 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wfnw" event={"ID":"b173045c-2ed1-4302-9855-81d80eb8529e","Type":"ContainerStarted","Data":"c672b2d5d5720cadc3594c08cd4666bf0ab5fb12df4aa43926302838c2d768bf"} Feb 17 18:21:37 crc kubenswrapper[4680]: I0217 18:21:37.018658 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5wfnw" podStartSLOduration=2.542758626 podStartE2EDuration="6.018635468s" podCreationTimestamp="2026-02-17 18:21:31 +0000 UTC" firstStartedPulling="2026-02-17 18:21:32.955110578 +0000 UTC m=+2222.506009257" lastFinishedPulling="2026-02-17 18:21:36.43098742 +0000 UTC m=+2225.981886099" observedRunningTime="2026-02-17 
18:21:37.008547422 +0000 UTC m=+2226.559446101" watchObservedRunningTime="2026-02-17 18:21:37.018635468 +0000 UTC m=+2226.569534147" Feb 17 18:21:41 crc kubenswrapper[4680]: I0217 18:21:41.761134 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5wfnw" Feb 17 18:21:41 crc kubenswrapper[4680]: I0217 18:21:41.761646 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5wfnw" Feb 17 18:21:41 crc kubenswrapper[4680]: I0217 18:21:41.806216 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5wfnw" Feb 17 18:21:42 crc kubenswrapper[4680]: I0217 18:21:42.099074 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5wfnw" Feb 17 18:21:42 crc kubenswrapper[4680]: I0217 18:21:42.150730 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5wfnw"] Feb 17 18:21:44 crc kubenswrapper[4680]: I0217 18:21:44.049514 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5wfnw" podUID="b173045c-2ed1-4302-9855-81d80eb8529e" containerName="registry-server" containerID="cri-o://c672b2d5d5720cadc3594c08cd4666bf0ab5fb12df4aa43926302838c2d768bf" gracePeriod=2 Feb 17 18:21:44 crc kubenswrapper[4680]: I0217 18:21:44.490083 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5wfnw" Feb 17 18:21:44 crc kubenswrapper[4680]: I0217 18:21:44.577565 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b173045c-2ed1-4302-9855-81d80eb8529e-catalog-content\") pod \"b173045c-2ed1-4302-9855-81d80eb8529e\" (UID: \"b173045c-2ed1-4302-9855-81d80eb8529e\") " Feb 17 18:21:44 crc kubenswrapper[4680]: I0217 18:21:44.577642 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b173045c-2ed1-4302-9855-81d80eb8529e-utilities\") pod \"b173045c-2ed1-4302-9855-81d80eb8529e\" (UID: \"b173045c-2ed1-4302-9855-81d80eb8529e\") " Feb 17 18:21:44 crc kubenswrapper[4680]: I0217 18:21:44.577663 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpqc2\" (UniqueName: \"kubernetes.io/projected/b173045c-2ed1-4302-9855-81d80eb8529e-kube-api-access-lpqc2\") pod \"b173045c-2ed1-4302-9855-81d80eb8529e\" (UID: \"b173045c-2ed1-4302-9855-81d80eb8529e\") " Feb 17 18:21:44 crc kubenswrapper[4680]: I0217 18:21:44.579759 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b173045c-2ed1-4302-9855-81d80eb8529e-utilities" (OuterVolumeSpecName: "utilities") pod "b173045c-2ed1-4302-9855-81d80eb8529e" (UID: "b173045c-2ed1-4302-9855-81d80eb8529e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:21:44 crc kubenswrapper[4680]: I0217 18:21:44.584509 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b173045c-2ed1-4302-9855-81d80eb8529e-kube-api-access-lpqc2" (OuterVolumeSpecName: "kube-api-access-lpqc2") pod "b173045c-2ed1-4302-9855-81d80eb8529e" (UID: "b173045c-2ed1-4302-9855-81d80eb8529e"). InnerVolumeSpecName "kube-api-access-lpqc2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:21:44 crc kubenswrapper[4680]: I0217 18:21:44.640252 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b173045c-2ed1-4302-9855-81d80eb8529e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b173045c-2ed1-4302-9855-81d80eb8529e" (UID: "b173045c-2ed1-4302-9855-81d80eb8529e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:21:44 crc kubenswrapper[4680]: I0217 18:21:44.679900 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b173045c-2ed1-4302-9855-81d80eb8529e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 18:21:44 crc kubenswrapper[4680]: I0217 18:21:44.679932 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b173045c-2ed1-4302-9855-81d80eb8529e-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 18:21:44 crc kubenswrapper[4680]: I0217 18:21:44.679942 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpqc2\" (UniqueName: \"kubernetes.io/projected/b173045c-2ed1-4302-9855-81d80eb8529e-kube-api-access-lpqc2\") on node \"crc\" DevicePath \"\"" Feb 17 18:21:45 crc kubenswrapper[4680]: I0217 18:21:45.060159 4680 generic.go:334] "Generic (PLEG): container finished" podID="b173045c-2ed1-4302-9855-81d80eb8529e" containerID="c672b2d5d5720cadc3594c08cd4666bf0ab5fb12df4aa43926302838c2d768bf" exitCode=0 Feb 17 18:21:45 crc kubenswrapper[4680]: I0217 18:21:45.060351 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wfnw" event={"ID":"b173045c-2ed1-4302-9855-81d80eb8529e","Type":"ContainerDied","Data":"c672b2d5d5720cadc3594c08cd4666bf0ab5fb12df4aa43926302838c2d768bf"} Feb 17 18:21:45 crc kubenswrapper[4680]: I0217 18:21:45.060407 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wfnw" event={"ID":"b173045c-2ed1-4302-9855-81d80eb8529e","Type":"ContainerDied","Data":"d662169f7797be6be76fd3de8480d9b9f30ef2d0fa5efa5e366dc7dbe70c0121"} Feb 17 18:21:45 crc kubenswrapper[4680]: I0217 18:21:45.060429 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5wfnw" Feb 17 18:21:45 crc kubenswrapper[4680]: I0217 18:21:45.060431 4680 scope.go:117] "RemoveContainer" containerID="c672b2d5d5720cadc3594c08cd4666bf0ab5fb12df4aa43926302838c2d768bf" Feb 17 18:21:45 crc kubenswrapper[4680]: I0217 18:21:45.089883 4680 scope.go:117] "RemoveContainer" containerID="f7e1c91c511ade6038cff8c9003bec5bd18683d462577f0256103a1045d76739" Feb 17 18:21:45 crc kubenswrapper[4680]: I0217 18:21:45.105947 4680 scope.go:117] "RemoveContainer" containerID="05e9321314b8bcfdfcee519e4b39d83405420f17828fd66310ce237cc3c9ab78" Feb 17 18:21:45 crc kubenswrapper[4680]: E0217 18:21:45.106625 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:21:45 crc kubenswrapper[4680]: I0217 18:21:45.124686 4680 scope.go:117] "RemoveContainer" containerID="1969749e7d252c16c28f269669f8d49c2caac9c836f441d10e77bafe8f7f2b29" Feb 17 18:21:45 crc kubenswrapper[4680]: I0217 18:21:45.133780 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5wfnw"] Feb 17 18:21:45 crc kubenswrapper[4680]: I0217 18:21:45.134058 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5wfnw"] Feb 17 18:21:45 crc kubenswrapper[4680]: I0217 18:21:45.175282 4680 scope.go:117] "RemoveContainer" containerID="c672b2d5d5720cadc3594c08cd4666bf0ab5fb12df4aa43926302838c2d768bf" Feb 17 18:21:45 crc kubenswrapper[4680]: E0217 18:21:45.175789 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c672b2d5d5720cadc3594c08cd4666bf0ab5fb12df4aa43926302838c2d768bf\": container with ID starting with c672b2d5d5720cadc3594c08cd4666bf0ab5fb12df4aa43926302838c2d768bf not found: ID does not exist" containerID="c672b2d5d5720cadc3594c08cd4666bf0ab5fb12df4aa43926302838c2d768bf" Feb 17 18:21:45 crc kubenswrapper[4680]: I0217 18:21:45.175828 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c672b2d5d5720cadc3594c08cd4666bf0ab5fb12df4aa43926302838c2d768bf"} err="failed to get container status \"c672b2d5d5720cadc3594c08cd4666bf0ab5fb12df4aa43926302838c2d768bf\": rpc error: code = NotFound desc = could not find container \"c672b2d5d5720cadc3594c08cd4666bf0ab5fb12df4aa43926302838c2d768bf\": container with ID starting with c672b2d5d5720cadc3594c08cd4666bf0ab5fb12df4aa43926302838c2d768bf not found: ID does not exist" Feb 17 18:21:45 crc kubenswrapper[4680]: I0217 18:21:45.175872 4680 scope.go:117] "RemoveContainer" containerID="f7e1c91c511ade6038cff8c9003bec5bd18683d462577f0256103a1045d76739" Feb 17 18:21:45 crc kubenswrapper[4680]: E0217 18:21:45.176226 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7e1c91c511ade6038cff8c9003bec5bd18683d462577f0256103a1045d76739\": container with ID starting with f7e1c91c511ade6038cff8c9003bec5bd18683d462577f0256103a1045d76739 not found: ID does not exist" containerID="f7e1c91c511ade6038cff8c9003bec5bd18683d462577f0256103a1045d76739" Feb 17 18:21:45 crc 
kubenswrapper[4680]: I0217 18:21:45.176270 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7e1c91c511ade6038cff8c9003bec5bd18683d462577f0256103a1045d76739"} err="failed to get container status \"f7e1c91c511ade6038cff8c9003bec5bd18683d462577f0256103a1045d76739\": rpc error: code = NotFound desc = could not find container \"f7e1c91c511ade6038cff8c9003bec5bd18683d462577f0256103a1045d76739\": container with ID starting with f7e1c91c511ade6038cff8c9003bec5bd18683d462577f0256103a1045d76739 not found: ID does not exist" Feb 17 18:21:45 crc kubenswrapper[4680]: I0217 18:21:45.176383 4680 scope.go:117] "RemoveContainer" containerID="1969749e7d252c16c28f269669f8d49c2caac9c836f441d10e77bafe8f7f2b29" Feb 17 18:21:45 crc kubenswrapper[4680]: E0217 18:21:45.176843 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1969749e7d252c16c28f269669f8d49c2caac9c836f441d10e77bafe8f7f2b29\": container with ID starting with 1969749e7d252c16c28f269669f8d49c2caac9c836f441d10e77bafe8f7f2b29 not found: ID does not exist" containerID="1969749e7d252c16c28f269669f8d49c2caac9c836f441d10e77bafe8f7f2b29" Feb 17 18:21:45 crc kubenswrapper[4680]: I0217 18:21:45.176884 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1969749e7d252c16c28f269669f8d49c2caac9c836f441d10e77bafe8f7f2b29"} err="failed to get container status \"1969749e7d252c16c28f269669f8d49c2caac9c836f441d10e77bafe8f7f2b29\": rpc error: code = NotFound desc = could not find container \"1969749e7d252c16c28f269669f8d49c2caac9c836f441d10e77bafe8f7f2b29\": container with ID starting with 1969749e7d252c16c28f269669f8d49c2caac9c836f441d10e77bafe8f7f2b29 not found: ID does not exist" Feb 17 18:21:47 crc kubenswrapper[4680]: I0217 18:21:47.114534 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b173045c-2ed1-4302-9855-81d80eb8529e" path="/var/lib/kubelet/pods/b173045c-2ed1-4302-9855-81d80eb8529e/volumes" Feb 17 18:21:59 crc kubenswrapper[4680]: I0217 18:21:59.106361 4680 scope.go:117] "RemoveContainer" containerID="05e9321314b8bcfdfcee519e4b39d83405420f17828fd66310ce237cc3c9ab78" Feb 17 18:21:59 crc kubenswrapper[4680]: E0217 18:21:59.107064 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:22:10 crc kubenswrapper[4680]: I0217 18:22:10.105406 4680 scope.go:117] "RemoveContainer" containerID="05e9321314b8bcfdfcee519e4b39d83405420f17828fd66310ce237cc3c9ab78" Feb 17 18:22:10 crc kubenswrapper[4680]: E0217 18:22:10.106018 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:22:22 crc kubenswrapper[4680]: I0217 18:22:22.445949 4680 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-q5pvd"] Feb 17 18:22:22 crc kubenswrapper[4680]: E0217 18:22:22.447036 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b173045c-2ed1-4302-9855-81d80eb8529e" containerName="extract-utilities" Feb 17 18:22:22 crc kubenswrapper[4680]: I0217 18:22:22.447056 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="b173045c-2ed1-4302-9855-81d80eb8529e" containerName="extract-utilities" Feb 17 18:22:22 crc kubenswrapper[4680]: E0217 18:22:22.447079 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b173045c-2ed1-4302-9855-81d80eb8529e" containerName="extract-content" Feb 17 18:22:22 crc kubenswrapper[4680]: I0217 18:22:22.447089 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="b173045c-2ed1-4302-9855-81d80eb8529e" containerName="extract-content" Feb 17 18:22:22 crc kubenswrapper[4680]: E0217 18:22:22.447114 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b173045c-2ed1-4302-9855-81d80eb8529e" containerName="registry-server" Feb 17 18:22:22 crc kubenswrapper[4680]: I0217 18:22:22.447122 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="b173045c-2ed1-4302-9855-81d80eb8529e" containerName="registry-server" Feb 17 18:22:22 crc kubenswrapper[4680]: I0217 18:22:22.447360 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="b173045c-2ed1-4302-9855-81d80eb8529e" containerName="registry-server" Feb 17 18:22:22 crc kubenswrapper[4680]: I0217 18:22:22.448876 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q5pvd" Feb 17 18:22:22 crc kubenswrapper[4680]: I0217 18:22:22.462399 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q5pvd"] Feb 17 18:22:22 crc kubenswrapper[4680]: I0217 18:22:22.579920 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fb92fac-8911-486e-82b9-71ba4c8aade5-catalog-content\") pod \"redhat-operators-q5pvd\" (UID: \"5fb92fac-8911-486e-82b9-71ba4c8aade5\") " pod="openshift-marketplace/redhat-operators-q5pvd" Feb 17 18:22:22 crc kubenswrapper[4680]: I0217 18:22:22.580405 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzp5x\" (UniqueName: \"kubernetes.io/projected/5fb92fac-8911-486e-82b9-71ba4c8aade5-kube-api-access-gzp5x\") pod \"redhat-operators-q5pvd\" (UID: \"5fb92fac-8911-486e-82b9-71ba4c8aade5\") " pod="openshift-marketplace/redhat-operators-q5pvd" Feb 17 18:22:22 crc kubenswrapper[4680]: I0217 18:22:22.580552 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fb92fac-8911-486e-82b9-71ba4c8aade5-utilities\") pod \"redhat-operators-q5pvd\" (UID: \"5fb92fac-8911-486e-82b9-71ba4c8aade5\") " pod="openshift-marketplace/redhat-operators-q5pvd" Feb 17 18:22:22 crc kubenswrapper[4680]: I0217 18:22:22.682507 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzp5x\" (UniqueName: \"kubernetes.io/projected/5fb92fac-8911-486e-82b9-71ba4c8aade5-kube-api-access-gzp5x\") pod \"redhat-operators-q5pvd\" (UID: \"5fb92fac-8911-486e-82b9-71ba4c8aade5\") " pod="openshift-marketplace/redhat-operators-q5pvd" Feb 17 18:22:22 crc kubenswrapper[4680]: I0217 18:22:22.682616 4680 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fb92fac-8911-486e-82b9-71ba4c8aade5-utilities\") pod \"redhat-operators-q5pvd\" (UID: \"5fb92fac-8911-486e-82b9-71ba4c8aade5\") " pod="openshift-marketplace/redhat-operators-q5pvd" Feb 17 18:22:22 crc kubenswrapper[4680]: I0217 18:22:22.682704 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fb92fac-8911-486e-82b9-71ba4c8aade5-catalog-content\") pod \"redhat-operators-q5pvd\" (UID: \"5fb92fac-8911-486e-82b9-71ba4c8aade5\") " pod="openshift-marketplace/redhat-operators-q5pvd" Feb 17 18:22:22 crc kubenswrapper[4680]: I0217 18:22:22.683108 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fb92fac-8911-486e-82b9-71ba4c8aade5-utilities\") pod \"redhat-operators-q5pvd\" (UID: \"5fb92fac-8911-486e-82b9-71ba4c8aade5\") " pod="openshift-marketplace/redhat-operators-q5pvd" Feb 17 18:22:22 crc kubenswrapper[4680]: I0217 18:22:22.683136 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fb92fac-8911-486e-82b9-71ba4c8aade5-catalog-content\") pod \"redhat-operators-q5pvd\" (UID: \"5fb92fac-8911-486e-82b9-71ba4c8aade5\") " pod="openshift-marketplace/redhat-operators-q5pvd" Feb 17 18:22:22 crc kubenswrapper[4680]: I0217 18:22:22.701525 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzp5x\" (UniqueName: \"kubernetes.io/projected/5fb92fac-8911-486e-82b9-71ba4c8aade5-kube-api-access-gzp5x\") pod \"redhat-operators-q5pvd\" (UID: \"5fb92fac-8911-486e-82b9-71ba4c8aade5\") " pod="openshift-marketplace/redhat-operators-q5pvd" Feb 17 18:22:22 crc kubenswrapper[4680]: I0217 18:22:22.769006 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q5pvd" Feb 17 18:22:23 crc kubenswrapper[4680]: I0217 18:22:23.227963 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q5pvd"] Feb 17 18:22:23 crc kubenswrapper[4680]: I0217 18:22:23.393736 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5pvd" event={"ID":"5fb92fac-8911-486e-82b9-71ba4c8aade5","Type":"ContainerStarted","Data":"2edd374afbb665a998b489fa1dd596548a51d6f74dbd310e1d32a175340d086d"} Feb 17 18:22:24 crc kubenswrapper[4680]: I0217 18:22:24.413557 4680 generic.go:334] "Generic (PLEG): container finished" podID="5fb92fac-8911-486e-82b9-71ba4c8aade5" containerID="86957f7914aee8bbd9aa392f77c6332747a618d423000418325832078619f5dc" exitCode=0 Feb 17 18:22:24 crc kubenswrapper[4680]: I0217 18:22:24.414011 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5pvd" event={"ID":"5fb92fac-8911-486e-82b9-71ba4c8aade5","Type":"ContainerDied","Data":"86957f7914aee8bbd9aa392f77c6332747a618d423000418325832078619f5dc"} Feb 17 18:22:24 crc kubenswrapper[4680]: I0217 18:22:24.416552 4680 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 18:22:25 crc kubenswrapper[4680]: I0217 18:22:25.105077 4680 scope.go:117] "RemoveContainer" containerID="05e9321314b8bcfdfcee519e4b39d83405420f17828fd66310ce237cc3c9ab78" Feb 17 18:22:25 crc kubenswrapper[4680]: E0217 18:22:25.105546 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:22:25 crc kubenswrapper[4680]: I0217 18:22:25.424117 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5pvd" event={"ID":"5fb92fac-8911-486e-82b9-71ba4c8aade5","Type":"ContainerStarted","Data":"d81a89f3b7ba22efebdc6dbb337d79718475b424c00c7673631fdd96a12f0537"} Feb 17 18:22:29 crc kubenswrapper[4680]: I0217 18:22:29.468422 4680 generic.go:334] "Generic (PLEG): container finished" podID="5fb92fac-8911-486e-82b9-71ba4c8aade5" containerID="d81a89f3b7ba22efebdc6dbb337d79718475b424c00c7673631fdd96a12f0537" exitCode=0 Feb 17 18:22:29 crc kubenswrapper[4680]: I0217 18:22:29.469462 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5pvd" event={"ID":"5fb92fac-8911-486e-82b9-71ba4c8aade5","Type":"ContainerDied","Data":"d81a89f3b7ba22efebdc6dbb337d79718475b424c00c7673631fdd96a12f0537"} Feb 17 18:22:30 crc kubenswrapper[4680]: I0217 18:22:30.478648 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5pvd" event={"ID":"5fb92fac-8911-486e-82b9-71ba4c8aade5","Type":"ContainerStarted","Data":"5722cbf41688e99964949118ef2825b18757df7d2464bc2ab56d558e06dd6e20"} Feb 17 18:22:30 crc kubenswrapper[4680]: I0217 18:22:30.504618 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q5pvd" podStartSLOduration=3.057039789 podStartE2EDuration="8.504594177s" podCreationTimestamp="2026-02-17 18:22:22 +0000 UTC" firstStartedPulling="2026-02-17 18:22:24.41614306 
+0000 UTC m=+2273.967041739" lastFinishedPulling="2026-02-17 18:22:29.863697448 +0000 UTC m=+2279.414596127" observedRunningTime="2026-02-17 18:22:30.498545568 +0000 UTC m=+2280.049444277" watchObservedRunningTime="2026-02-17 18:22:30.504594177 +0000 UTC m=+2280.055492856" Feb 17 18:22:32 crc kubenswrapper[4680]: I0217 18:22:32.770250 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q5pvd" Feb 17 18:22:32 crc kubenswrapper[4680]: I0217 18:22:32.770870 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q5pvd" Feb 17 18:22:33 crc kubenswrapper[4680]: I0217 18:22:33.831950 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q5pvd" podUID="5fb92fac-8911-486e-82b9-71ba4c8aade5" containerName="registry-server" probeResult="failure" output=< Feb 17 18:22:33 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Feb 17 18:22:33 crc kubenswrapper[4680]: > Feb 17 18:22:40 crc kubenswrapper[4680]: I0217 18:22:40.134523 4680 scope.go:117] "RemoveContainer" containerID="05e9321314b8bcfdfcee519e4b39d83405420f17828fd66310ce237cc3c9ab78" Feb 17 18:22:40 crc kubenswrapper[4680]: E0217 18:22:40.135500 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:22:42 crc kubenswrapper[4680]: I0217 18:22:42.819605 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q5pvd" Feb 17 18:22:42 crc kubenswrapper[4680]: I0217 18:22:42.871913 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q5pvd" Feb 17 18:22:43 crc kubenswrapper[4680]: I0217 18:22:43.067070 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q5pvd"] Feb 17 18:22:44 crc kubenswrapper[4680]: I0217 18:22:44.635933 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q5pvd" podUID="5fb92fac-8911-486e-82b9-71ba4c8aade5" containerName="registry-server" containerID="cri-o://5722cbf41688e99964949118ef2825b18757df7d2464bc2ab56d558e06dd6e20" gracePeriod=2 Feb 17 18:22:45 crc kubenswrapper[4680]: I0217 18:22:45.107459 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q5pvd" Feb 17 18:22:45 crc kubenswrapper[4680]: I0217 18:22:45.131845 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fb92fac-8911-486e-82b9-71ba4c8aade5-catalog-content\") pod \"5fb92fac-8911-486e-82b9-71ba4c8aade5\" (UID: \"5fb92fac-8911-486e-82b9-71ba4c8aade5\") " Feb 17 18:22:45 crc kubenswrapper[4680]: I0217 18:22:45.131956 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fb92fac-8911-486e-82b9-71ba4c8aade5-utilities\") pod \"5fb92fac-8911-486e-82b9-71ba4c8aade5\" (UID: \"5fb92fac-8911-486e-82b9-71ba4c8aade5\") " Feb 17 18:22:45 crc kubenswrapper[4680]: I0217 18:22:45.132050 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzp5x\" (UniqueName: \"kubernetes.io/projected/5fb92fac-8911-486e-82b9-71ba4c8aade5-kube-api-access-gzp5x\") pod \"5fb92fac-8911-486e-82b9-71ba4c8aade5\" (UID: \"5fb92fac-8911-486e-82b9-71ba4c8aade5\") " Feb 17 18:22:45 crc kubenswrapper[4680]: I0217 18:22:45.132878 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fb92fac-8911-486e-82b9-71ba4c8aade5-utilities" (OuterVolumeSpecName: "utilities") pod "5fb92fac-8911-486e-82b9-71ba4c8aade5" (UID: "5fb92fac-8911-486e-82b9-71ba4c8aade5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:22:45 crc kubenswrapper[4680]: I0217 18:22:45.139916 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fb92fac-8911-486e-82b9-71ba4c8aade5-kube-api-access-gzp5x" (OuterVolumeSpecName: "kube-api-access-gzp5x") pod "5fb92fac-8911-486e-82b9-71ba4c8aade5" (UID: "5fb92fac-8911-486e-82b9-71ba4c8aade5"). InnerVolumeSpecName "kube-api-access-gzp5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:22:45 crc kubenswrapper[4680]: I0217 18:22:45.234109 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzp5x\" (UniqueName: \"kubernetes.io/projected/5fb92fac-8911-486e-82b9-71ba4c8aade5-kube-api-access-gzp5x\") on node \"crc\" DevicePath \"\"" Feb 17 18:22:45 crc kubenswrapper[4680]: I0217 18:22:45.234151 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fb92fac-8911-486e-82b9-71ba4c8aade5-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 18:22:45 crc kubenswrapper[4680]: I0217 18:22:45.260077 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fb92fac-8911-486e-82b9-71ba4c8aade5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5fb92fac-8911-486e-82b9-71ba4c8aade5" (UID: "5fb92fac-8911-486e-82b9-71ba4c8aade5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:22:45 crc kubenswrapper[4680]: I0217 18:22:45.335781 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fb92fac-8911-486e-82b9-71ba4c8aade5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 18:22:45 crc kubenswrapper[4680]: I0217 18:22:45.647964 4680 generic.go:334] "Generic (PLEG): container finished" podID="5fb92fac-8911-486e-82b9-71ba4c8aade5" containerID="5722cbf41688e99964949118ef2825b18757df7d2464bc2ab56d558e06dd6e20" exitCode=0 Feb 17 18:22:45 crc kubenswrapper[4680]: I0217 18:22:45.648010 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5pvd" event={"ID":"5fb92fac-8911-486e-82b9-71ba4c8aade5","Type":"ContainerDied","Data":"5722cbf41688e99964949118ef2825b18757df7d2464bc2ab56d558e06dd6e20"} Feb 17 18:22:45 crc kubenswrapper[4680]: I0217 18:22:45.648026 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q5pvd" Feb 17 18:22:45 crc kubenswrapper[4680]: I0217 18:22:45.648035 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5pvd" event={"ID":"5fb92fac-8911-486e-82b9-71ba4c8aade5","Type":"ContainerDied","Data":"2edd374afbb665a998b489fa1dd596548a51d6f74dbd310e1d32a175340d086d"} Feb 17 18:22:45 crc kubenswrapper[4680]: I0217 18:22:45.648053 4680 scope.go:117] "RemoveContainer" containerID="5722cbf41688e99964949118ef2825b18757df7d2464bc2ab56d558e06dd6e20" Feb 17 18:22:45 crc kubenswrapper[4680]: I0217 18:22:45.683537 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q5pvd"] Feb 17 18:22:45 crc kubenswrapper[4680]: I0217 18:22:45.689943 4680 scope.go:117] "RemoveContainer" containerID="d81a89f3b7ba22efebdc6dbb337d79718475b424c00c7673631fdd96a12f0537" Feb 17 18:22:45 crc kubenswrapper[4680]: I0217 18:22:45.695393 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q5pvd"] Feb 17 18:22:45 crc kubenswrapper[4680]: I0217 18:22:45.711172 4680 scope.go:117] "RemoveContainer" containerID="86957f7914aee8bbd9aa392f77c6332747a618d423000418325832078619f5dc" Feb 17 18:22:45 crc kubenswrapper[4680]: I0217 18:22:45.745135 4680 scope.go:117] "RemoveContainer" containerID="5722cbf41688e99964949118ef2825b18757df7d2464bc2ab56d558e06dd6e20" Feb 17 18:22:45 crc kubenswrapper[4680]: E0217 18:22:45.745594 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5722cbf41688e99964949118ef2825b18757df7d2464bc2ab56d558e06dd6e20\": container with ID starting with 5722cbf41688e99964949118ef2825b18757df7d2464bc2ab56d558e06dd6e20 not found: ID does not exist" containerID="5722cbf41688e99964949118ef2825b18757df7d2464bc2ab56d558e06dd6e20" Feb 17 18:22:45 crc kubenswrapper[4680]: I0217 18:22:45.745623 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5722cbf41688e99964949118ef2825b18757df7d2464bc2ab56d558e06dd6e20"} err="failed to get container status \"5722cbf41688e99964949118ef2825b18757df7d2464bc2ab56d558e06dd6e20\": rpc error: code = NotFound desc = could not find container \"5722cbf41688e99964949118ef2825b18757df7d2464bc2ab56d558e06dd6e20\": container with ID starting with 5722cbf41688e99964949118ef2825b18757df7d2464bc2ab56d558e06dd6e20 not found: ID does not exist" Feb 17 18:22:45 crc 
kubenswrapper[4680]: I0217 18:22:45.745642 4680 scope.go:117] "RemoveContainer" containerID="d81a89f3b7ba22efebdc6dbb337d79718475b424c00c7673631fdd96a12f0537" Feb 17 18:22:45 crc kubenswrapper[4680]: E0217 18:22:45.745931 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d81a89f3b7ba22efebdc6dbb337d79718475b424c00c7673631fdd96a12f0537\": container with ID starting with d81a89f3b7ba22efebdc6dbb337d79718475b424c00c7673631fdd96a12f0537 not found: ID does not exist" containerID="d81a89f3b7ba22efebdc6dbb337d79718475b424c00c7673631fdd96a12f0537" Feb 17 18:22:45 crc kubenswrapper[4680]: I0217 18:22:45.745968 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d81a89f3b7ba22efebdc6dbb337d79718475b424c00c7673631fdd96a12f0537"} err="failed to get container status \"d81a89f3b7ba22efebdc6dbb337d79718475b424c00c7673631fdd96a12f0537\": rpc error: code = NotFound desc = could not find container \"d81a89f3b7ba22efebdc6dbb337d79718475b424c00c7673631fdd96a12f0537\": container with ID starting with d81a89f3b7ba22efebdc6dbb337d79718475b424c00c7673631fdd96a12f0537 not found: ID does not exist" Feb 17 18:22:45 crc kubenswrapper[4680]: I0217 18:22:45.745992 4680 scope.go:117] "RemoveContainer" containerID="86957f7914aee8bbd9aa392f77c6332747a618d423000418325832078619f5dc" Feb 17 18:22:45 crc kubenswrapper[4680]: E0217 18:22:45.746415 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86957f7914aee8bbd9aa392f77c6332747a618d423000418325832078619f5dc\": container with ID starting with 86957f7914aee8bbd9aa392f77c6332747a618d423000418325832078619f5dc not found: ID does not exist" containerID="86957f7914aee8bbd9aa392f77c6332747a618d423000418325832078619f5dc" Feb 17 18:22:45 crc kubenswrapper[4680]: I0217 18:22:45.746438 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86957f7914aee8bbd9aa392f77c6332747a618d423000418325832078619f5dc"} err="failed to get container status \"86957f7914aee8bbd9aa392f77c6332747a618d423000418325832078619f5dc\": rpc error: code = NotFound desc = could not find container \"86957f7914aee8bbd9aa392f77c6332747a618d423000418325832078619f5dc\": container with ID starting with 86957f7914aee8bbd9aa392f77c6332747a618d423000418325832078619f5dc not found: ID does not exist" Feb 17 18:22:47 crc kubenswrapper[4680]: I0217 18:22:47.129810 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fb92fac-8911-486e-82b9-71ba4c8aade5" path="/var/lib/kubelet/pods/5fb92fac-8911-486e-82b9-71ba4c8aade5/volumes" Feb 17 18:22:52 crc kubenswrapper[4680]: I0217 18:22:52.105944 4680 scope.go:117] "RemoveContainer" containerID="05e9321314b8bcfdfcee519e4b39d83405420f17828fd66310ce237cc3c9ab78" Feb 17 18:22:52 crc kubenswrapper[4680]: E0217 18:22:52.106692 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:23:05 crc kubenswrapper[4680]: I0217 18:23:05.105435 4680 scope.go:117] "RemoveContainer" containerID="05e9321314b8bcfdfcee519e4b39d83405420f17828fd66310ce237cc3c9ab78" 
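
Two patterns in the entries above can be reproduced outside the log. First, the repeated "back-off 5m0s restarting failed container" errors for machine-config-daemon-rfw56 are kubelet's crash-loop back-off saturating: each failed restart roughly doubles the wait before the next start attempt until it hits the five-minute cap quoted in the message, after which every sync of the pod is skipped with the same CrashLoopBackOff error. Second, the pod_startup_latency_tracker lines ("Observed pod startup duration") can be recomputed from the timestamps they print: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same span minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The sketch below is illustrative only, not kubelet source: the helper names backoffDelays and mustParse are ours, and the 10-second base for the back-off is an assumption consistent with the 5m0s cap seen in the log.

package main

import (
	"fmt"
	"time"
)

// backoffDelays sketches a doubling back-off capped at five minutes, the
// behavior behind the repeated "back-off 5m0s restarting failed container"
// messages. Base (10s) and cap (5m) are assumptions, not kubelet's source.
func backoffDelays(restarts int) []time.Duration {
	const (
		base     = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	delays := make([]time.Duration, 0, restarts)
	d := base
	for i := 0; i < restarts; i++ {
		delays = append(delays, d)
		if d*2 < maxDelay {
			d *= 2
		} else {
			d = maxDelay
		}
	}
	return delays
}

// mustParse parses the "2026-02-17 18:21:31" style timestamps from the log
// (the "+0000 UTC" suffix trimmed); Go accepts an optional fractional-second
// field when parsing even though the layout omits it.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// After a handful of failures the delay saturates at 5m0s, which is why
	// later syncs of machine-config-daemon-rfw56 keep logging the same error.
	fmt.Println(backoffDelays(7)) // [10s 20s 40s 1m20s 2m40s 5m0s 5m0s]

	// Recompute the startup durations logged at 18:21:37 for
	// certified-operators-5wfnw from the timestamps in that entry.
	created := mustParse("2026-02-17 18:21:31")
	firstPull := mustParse("2026-02-17 18:21:32.955110578")
	lastPull := mustParse("2026-02-17 18:21:36.43098742")
	running := mustParse("2026-02-17 18:21:37.018635468")

	e2e := running.Sub(created)          // 6.018635468s = podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 2.542758626s = podStartSLOduration
	fmt.Println(e2e, slo)
}

The same arithmetic matches the second catalog pod: for redhat-operators-q5pvd the logged podStartE2EDuration of 8.504594177s minus its pull window of about 5.45s gives the logged podStartSLOduration of 3.057039789s.
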
Feb 17 18:23:05 crc kubenswrapper[4680]: E0217 18:23:05.106176 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:23:16 crc kubenswrapper[4680]: I0217 18:23:16.105564 4680 scope.go:117] "RemoveContainer" containerID="05e9321314b8bcfdfcee519e4b39d83405420f17828fd66310ce237cc3c9ab78" Feb 17 18:23:16 crc kubenswrapper[4680]: E0217 18:23:16.106269 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:23:27 crc kubenswrapper[4680]: I0217 18:23:27.105832 4680 scope.go:117] "RemoveContainer" containerID="05e9321314b8bcfdfcee519e4b39d83405420f17828fd66310ce237cc3c9ab78" Feb 17 18:23:27 crc kubenswrapper[4680]: E0217 18:23:27.106433 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:23:38 crc kubenswrapper[4680]: I0217 18:23:38.043855 4680 generic.go:334] "Generic (PLEG): container finished" podID="d1713773-5461-439e-8d37-5c1f2b54db0a" containerID="b4388965afc8478c1d2059167a99a89be760ec5a36de37a6b470de8120070ad6" exitCode=0 Feb 17 18:23:38 crc kubenswrapper[4680]: I0217 18:23:38.043942 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-65z5s" event={"ID":"d1713773-5461-439e-8d37-5c1f2b54db0a","Type":"ContainerDied","Data":"b4388965afc8478c1d2059167a99a89be760ec5a36de37a6b470de8120070ad6"} Feb 17 18:23:39 crc kubenswrapper[4680]: I0217 18:23:39.446821 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-65z5s" Feb 17 18:23:39 crc kubenswrapper[4680]: I0217 18:23:39.633259 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1713773-5461-439e-8d37-5c1f2b54db0a-ssh-key-openstack-edpm-ipam\") pod \"d1713773-5461-439e-8d37-5c1f2b54db0a\" (UID: \"d1713773-5461-439e-8d37-5c1f2b54db0a\") " Feb 17 18:23:39 crc kubenswrapper[4680]: I0217 18:23:39.633579 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d1713773-5461-439e-8d37-5c1f2b54db0a-inventory\") pod \"d1713773-5461-439e-8d37-5c1f2b54db0a\" (UID: \"d1713773-5461-439e-8d37-5c1f2b54db0a\") " Feb 17 18:23:39 crc kubenswrapper[4680]: I0217 18:23:39.633666 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5nxf\" (UniqueName: \"kubernetes.io/projected/d1713773-5461-439e-8d37-5c1f2b54db0a-kube-api-access-x5nxf\") pod \"d1713773-5461-439e-8d37-5c1f2b54db0a\" (UID: \"d1713773-5461-439e-8d37-5c1f2b54db0a\") " Feb 17 18:23:39 crc kubenswrapper[4680]: I0217 18:23:39.633696 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d1713773-5461-439e-8d37-5c1f2b54db0a-libvirt-secret-0\") pod \"d1713773-5461-439e-8d37-5c1f2b54db0a\" (UID: \"d1713773-5461-439e-8d37-5c1f2b54db0a\") " Feb 17 18:23:39 crc kubenswrapper[4680]: I0217 18:23:39.633903 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1713773-5461-439e-8d37-5c1f2b54db0a-libvirt-combined-ca-bundle\") pod \"d1713773-5461-439e-8d37-5c1f2b54db0a\" (UID: \"d1713773-5461-439e-8d37-5c1f2b54db0a\") " Feb 17 18:23:39 crc kubenswrapper[4680]: I0217 18:23:39.639769 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1713773-5461-439e-8d37-5c1f2b54db0a-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "d1713773-5461-439e-8d37-5c1f2b54db0a" (UID: "d1713773-5461-439e-8d37-5c1f2b54db0a"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:23:39 crc kubenswrapper[4680]: I0217 18:23:39.639812 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1713773-5461-439e-8d37-5c1f2b54db0a-kube-api-access-x5nxf" (OuterVolumeSpecName: "kube-api-access-x5nxf") pod "d1713773-5461-439e-8d37-5c1f2b54db0a" (UID: "d1713773-5461-439e-8d37-5c1f2b54db0a"). InnerVolumeSpecName "kube-api-access-x5nxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:23:39 crc kubenswrapper[4680]: I0217 18:23:39.668830 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1713773-5461-439e-8d37-5c1f2b54db0a-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "d1713773-5461-439e-8d37-5c1f2b54db0a" (UID: "d1713773-5461-439e-8d37-5c1f2b54db0a"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:23:39 crc kubenswrapper[4680]: I0217 18:23:39.669617 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1713773-5461-439e-8d37-5c1f2b54db0a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d1713773-5461-439e-8d37-5c1f2b54db0a" (UID: "d1713773-5461-439e-8d37-5c1f2b54db0a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:23:39 crc kubenswrapper[4680]: I0217 18:23:39.669950 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1713773-5461-439e-8d37-5c1f2b54db0a-inventory" (OuterVolumeSpecName: "inventory") pod "d1713773-5461-439e-8d37-5c1f2b54db0a" (UID: "d1713773-5461-439e-8d37-5c1f2b54db0a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:23:39 crc kubenswrapper[4680]: I0217 18:23:39.736328 4680 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1713773-5461-439e-8d37-5c1f2b54db0a-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:23:39 crc kubenswrapper[4680]: I0217 18:23:39.736355 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1713773-5461-439e-8d37-5c1f2b54db0a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 18:23:39 crc kubenswrapper[4680]: I0217 18:23:39.736366 4680 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d1713773-5461-439e-8d37-5c1f2b54db0a-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 18:23:39 crc kubenswrapper[4680]: I0217 18:23:39.736375 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5nxf\" (UniqueName: \"kubernetes.io/projected/d1713773-5461-439e-8d37-5c1f2b54db0a-kube-api-access-x5nxf\") on node \"crc\" DevicePath \"\"" Feb 17 18:23:39 crc kubenswrapper[4680]: I0217 18:23:39.736385 4680 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d1713773-5461-439e-8d37-5c1f2b54db0a-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.059834 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-65z5s" event={"ID":"d1713773-5461-439e-8d37-5c1f2b54db0a","Type":"ContainerDied","Data":"6f3953459cad3b08c45f1db918d00c3e8a7663a54a4ac9df3d16c2cdaaa09423"} Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.059881 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f3953459cad3b08c45f1db918d00c3e8a7663a54a4ac9df3d16c2cdaaa09423" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.059949 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-65z5s" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.185381 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45"] Feb 17 18:23:40 crc kubenswrapper[4680]: E0217 18:23:40.185826 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fb92fac-8911-486e-82b9-71ba4c8aade5" containerName="extract-utilities" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.185844 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fb92fac-8911-486e-82b9-71ba4c8aade5" containerName="extract-utilities" Feb 17 18:23:40 crc kubenswrapper[4680]: E0217 18:23:40.185860 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1713773-5461-439e-8d37-5c1f2b54db0a" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.185881 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1713773-5461-439e-8d37-5c1f2b54db0a" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 17 18:23:40 crc kubenswrapper[4680]: E0217 18:23:40.185894 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fb92fac-8911-486e-82b9-71ba4c8aade5" containerName="registry-server" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.185900 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fb92fac-8911-486e-82b9-71ba4c8aade5" containerName="registry-server" Feb 17 18:23:40 crc kubenswrapper[4680]: E0217 18:23:40.185912 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fb92fac-8911-486e-82b9-71ba4c8aade5" containerName="extract-content" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.185917 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fb92fac-8911-486e-82b9-71ba4c8aade5" containerName="extract-content" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.186124 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fb92fac-8911-486e-82b9-71ba4c8aade5" containerName="registry-server" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.186138 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1713773-5461-439e-8d37-5c1f2b54db0a" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.186883 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.189700 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.189773 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.189932 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mhdwq" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.190065 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.190067 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.190273 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.190771 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.202616 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45"] Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.348648 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.348729 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.348762 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.348923 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.348979 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.349006 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/111c14f3-6274-434d-a73d-09328a365bd2-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.349139 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krhsz\" (UniqueName: \"kubernetes.io/projected/111c14f3-6274-434d-a73d-09328a365bd2-kube-api-access-krhsz\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.349192 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.349294 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.349343 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.349405 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.451170 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.451223 4680 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.451271 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.451349 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.451422 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.451457 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.451504 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.451537 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/111c14f3-6274-434d-a73d-09328a365bd2-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.451558 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.451623 4680 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-krhsz\" (UniqueName: \"kubernetes.io/projected/111c14f3-6274-434d-a73d-09328a365bd2-kube-api-access-krhsz\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.451664 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.452514 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/111c14f3-6274-434d-a73d-09328a365bd2-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.454676 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.454821 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.455305 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.455597 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.457109 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.458548 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.459781 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.459955 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.461382 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.470670 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krhsz\" (UniqueName: \"kubernetes.io/projected/111c14f3-6274-434d-a73d-09328a365bd2-kube-api-access-krhsz\") pod \"nova-edpm-deployment-openstack-edpm-ipam-plf45\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:40 crc kubenswrapper[4680]: I0217 18:23:40.546181 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:23:41 crc kubenswrapper[4680]: I0217 18:23:41.028910 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45"] Feb 17 18:23:41 crc kubenswrapper[4680]: I0217 18:23:41.068275 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" event={"ID":"111c14f3-6274-434d-a73d-09328a365bd2","Type":"ContainerStarted","Data":"40eefc3dbf4a9d710676ab008d67664b8b984e6eb35019e55023ee113a2c58da"} Feb 17 18:23:41 crc kubenswrapper[4680]: I0217 18:23:41.114069 4680 scope.go:117] "RemoveContainer" containerID="05e9321314b8bcfdfcee519e4b39d83405420f17828fd66310ce237cc3c9ab78" Feb 17 18:23:41 crc kubenswrapper[4680]: E0217 18:23:41.114268 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:23:42 crc kubenswrapper[4680]: I0217 18:23:42.078155 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" event={"ID":"111c14f3-6274-434d-a73d-09328a365bd2","Type":"ContainerStarted","Data":"8000d0b573397a57863b47abe43ae4dad6b82cb977daf8ef2af555f8c98e226a"} Feb 17 18:23:42 crc kubenswrapper[4680]: I0217 18:23:42.107568 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" podStartSLOduration=1.55220237 podStartE2EDuration="2.10754904s" podCreationTimestamp="2026-02-17 18:23:40 +0000 UTC" firstStartedPulling="2026-02-17 18:23:41.033570523 +0000 UTC m=+2350.584469202" lastFinishedPulling="2026-02-17 18:23:41.588917193 +0000 UTC m=+2351.139815872" observedRunningTime="2026-02-17 18:23:42.095054267 +0000 UTC m=+2351.645952966" watchObservedRunningTime="2026-02-17 18:23:42.10754904 +0000 UTC m=+2351.658447729" Feb 17 18:23:54 crc kubenswrapper[4680]: I0217 18:23:54.105489 4680 scope.go:117] "RemoveContainer" containerID="05e9321314b8bcfdfcee519e4b39d83405420f17828fd66310ce237cc3c9ab78" Feb 17 18:23:54 crc kubenswrapper[4680]: E0217 18:23:54.106243 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:24:06 crc kubenswrapper[4680]: I0217 18:24:06.104988 4680 scope.go:117] "RemoveContainer" containerID="05e9321314b8bcfdfcee519e4b39d83405420f17828fd66310ce237cc3c9ab78" Feb 17 18:24:06 crc kubenswrapper[4680]: E0217 18:24:06.105628 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" 
podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:24:17 crc kubenswrapper[4680]: I0217 18:24:17.105654 4680 scope.go:117] "RemoveContainer" containerID="05e9321314b8bcfdfcee519e4b39d83405420f17828fd66310ce237cc3c9ab78" Feb 17 18:24:17 crc kubenswrapper[4680]: E0217 18:24:17.106469 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:24:32 crc kubenswrapper[4680]: I0217 18:24:32.105360 4680 scope.go:117] "RemoveContainer" containerID="05e9321314b8bcfdfcee519e4b39d83405420f17828fd66310ce237cc3c9ab78" Feb 17 18:24:32 crc kubenswrapper[4680]: E0217 18:24:32.106129 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:24:45 crc kubenswrapper[4680]: I0217 18:24:45.105325 4680 scope.go:117] "RemoveContainer" containerID="05e9321314b8bcfdfcee519e4b39d83405420f17828fd66310ce237cc3c9ab78" Feb 17 18:24:45 crc kubenswrapper[4680]: E0217 18:24:45.106051 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:25:00 crc kubenswrapper[4680]: I0217 18:25:00.105700 4680 scope.go:117] "RemoveContainer" containerID="05e9321314b8bcfdfcee519e4b39d83405420f17828fd66310ce237cc3c9ab78" Feb 17 18:25:00 crc kubenswrapper[4680]: E0217 18:25:00.106509 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:25:13 crc kubenswrapper[4680]: I0217 18:25:13.105299 4680 scope.go:117] "RemoveContainer" containerID="05e9321314b8bcfdfcee519e4b39d83405420f17828fd66310ce237cc3c9ab78" Feb 17 18:25:13 crc kubenswrapper[4680]: E0217 18:25:13.106167 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:25:25 crc kubenswrapper[4680]: I0217 18:25:25.105983 4680 scope.go:117] "RemoveContainer" 
containerID="05e9321314b8bcfdfcee519e4b39d83405420f17828fd66310ce237cc3c9ab78" Feb 17 18:25:25 crc kubenswrapper[4680]: E0217 18:25:25.106719 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:25:40 crc kubenswrapper[4680]: I0217 18:25:40.104903 4680 scope.go:117] "RemoveContainer" containerID="05e9321314b8bcfdfcee519e4b39d83405420f17828fd66310ce237cc3c9ab78" Feb 17 18:25:40 crc kubenswrapper[4680]: E0217 18:25:40.107475 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:25:51 crc kubenswrapper[4680]: I0217 18:25:51.111051 4680 scope.go:117] "RemoveContainer" containerID="05e9321314b8bcfdfcee519e4b39d83405420f17828fd66310ce237cc3c9ab78" Feb 17 18:25:51 crc kubenswrapper[4680]: E0217 18:25:51.111756 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:26:02 crc kubenswrapper[4680]: I0217 18:26:02.105153 4680 scope.go:117] "RemoveContainer" containerID="05e9321314b8bcfdfcee519e4b39d83405420f17828fd66310ce237cc3c9ab78" Feb 17 18:26:02 crc kubenswrapper[4680]: E0217 18:26:02.105812 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:26:03 crc kubenswrapper[4680]: I0217 18:26:03.235225 4680 generic.go:334] "Generic (PLEG): container finished" podID="111c14f3-6274-434d-a73d-09328a365bd2" containerID="8000d0b573397a57863b47abe43ae4dad6b82cb977daf8ef2af555f8c98e226a" exitCode=0 Feb 17 18:26:03 crc kubenswrapper[4680]: I0217 18:26:03.235348 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" event={"ID":"111c14f3-6274-434d-a73d-09328a365bd2","Type":"ContainerDied","Data":"8000d0b573397a57863b47abe43ae4dad6b82cb977daf8ef2af555f8c98e226a"} Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.694789 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.776754 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krhsz\" (UniqueName: \"kubernetes.io/projected/111c14f3-6274-434d-a73d-09328a365bd2-kube-api-access-krhsz\") pod \"111c14f3-6274-434d-a73d-09328a365bd2\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.776813 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/111c14f3-6274-434d-a73d-09328a365bd2-nova-extra-config-0\") pod \"111c14f3-6274-434d-a73d-09328a365bd2\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.776952 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-inventory\") pod \"111c14f3-6274-434d-a73d-09328a365bd2\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.776982 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-cell1-compute-config-3\") pod \"111c14f3-6274-434d-a73d-09328a365bd2\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.777011 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-ssh-key-openstack-edpm-ipam\") pod \"111c14f3-6274-434d-a73d-09328a365bd2\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.777036 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-cell1-compute-config-0\") pod \"111c14f3-6274-434d-a73d-09328a365bd2\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.777064 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-cell1-compute-config-1\") pod \"111c14f3-6274-434d-a73d-09328a365bd2\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.777101 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-cell1-compute-config-2\") pod \"111c14f3-6274-434d-a73d-09328a365bd2\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.777123 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-migration-ssh-key-1\") pod \"111c14f3-6274-434d-a73d-09328a365bd2\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.777152 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-combined-ca-bundle\") pod \"111c14f3-6274-434d-a73d-09328a365bd2\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.777296 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-migration-ssh-key-0\") pod \"111c14f3-6274-434d-a73d-09328a365bd2\" (UID: \"111c14f3-6274-434d-a73d-09328a365bd2\") " Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.782751 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/111c14f3-6274-434d-a73d-09328a365bd2-kube-api-access-krhsz" (OuterVolumeSpecName: "kube-api-access-krhsz") pod "111c14f3-6274-434d-a73d-09328a365bd2" (UID: "111c14f3-6274-434d-a73d-09328a365bd2"). InnerVolumeSpecName "kube-api-access-krhsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.783502 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "111c14f3-6274-434d-a73d-09328a365bd2" (UID: "111c14f3-6274-434d-a73d-09328a365bd2"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.811896 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "111c14f3-6274-434d-a73d-09328a365bd2" (UID: "111c14f3-6274-434d-a73d-09328a365bd2"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.815533 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-inventory" (OuterVolumeSpecName: "inventory") pod "111c14f3-6274-434d-a73d-09328a365bd2" (UID: "111c14f3-6274-434d-a73d-09328a365bd2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.823674 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/111c14f3-6274-434d-a73d-09328a365bd2-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "111c14f3-6274-434d-a73d-09328a365bd2" (UID: "111c14f3-6274-434d-a73d-09328a365bd2"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.826144 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "111c14f3-6274-434d-a73d-09328a365bd2" (UID: "111c14f3-6274-434d-a73d-09328a365bd2"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.831144 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "111c14f3-6274-434d-a73d-09328a365bd2" (UID: "111c14f3-6274-434d-a73d-09328a365bd2"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.833100 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "111c14f3-6274-434d-a73d-09328a365bd2" (UID: "111c14f3-6274-434d-a73d-09328a365bd2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.833170 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "111c14f3-6274-434d-a73d-09328a365bd2" (UID: "111c14f3-6274-434d-a73d-09328a365bd2"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.853713 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "111c14f3-6274-434d-a73d-09328a365bd2" (UID: "111c14f3-6274-434d-a73d-09328a365bd2"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.856874 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "111c14f3-6274-434d-a73d-09328a365bd2" (UID: "111c14f3-6274-434d-a73d-09328a365bd2"). InnerVolumeSpecName "nova-cell1-compute-config-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.883287 4680 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.883347 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krhsz\" (UniqueName: \"kubernetes.io/projected/111c14f3-6274-434d-a73d-09328a365bd2-kube-api-access-krhsz\") on node \"crc\" DevicePath \"\"" Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.883358 4680 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/111c14f3-6274-434d-a73d-09328a365bd2-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.883368 4680 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.883401 4680 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.883409 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.883418 4680 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.883428 4680 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.883436 4680 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.883445 4680 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 17 18:26:04 crc kubenswrapper[4680]: I0217 18:26:04.883469 4680 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/111c14f3-6274-434d-a73d-09328a365bd2-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.258557 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" event={"ID":"111c14f3-6274-434d-a73d-09328a365bd2","Type":"ContainerDied","Data":"40eefc3dbf4a9d710676ab008d67664b8b984e6eb35019e55023ee113a2c58da"} Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 
18:26:05.258597 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40eefc3dbf4a9d710676ab008d67664b8b984e6eb35019e55023ee113a2c58da" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.258638 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-plf45" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.369092 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz"] Feb 17 18:26:05 crc kubenswrapper[4680]: E0217 18:26:05.369494 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="111c14f3-6274-434d-a73d-09328a365bd2" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.369524 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="111c14f3-6274-434d-a73d-09328a365bd2" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.369771 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="111c14f3-6274-434d-a73d-09328a365bd2" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.370452 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.374287 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.374429 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.374485 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.374693 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.379833 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mhdwq" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.382446 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz"] Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.493696 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-grdvz\" (UID: \"030c3571-afbe-48b3-9a59-890b63527ab7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.493991 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-grdvz\" (UID: \"030c3571-afbe-48b3-9a59-890b63527ab7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.494110 
4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gpqv\" (UniqueName: \"kubernetes.io/projected/030c3571-afbe-48b3-9a59-890b63527ab7-kube-api-access-6gpqv\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-grdvz\" (UID: \"030c3571-afbe-48b3-9a59-890b63527ab7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.494419 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-grdvz\" (UID: \"030c3571-afbe-48b3-9a59-890b63527ab7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.494606 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-grdvz\" (UID: \"030c3571-afbe-48b3-9a59-890b63527ab7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.494711 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-grdvz\" (UID: \"030c3571-afbe-48b3-9a59-890b63527ab7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.494923 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-grdvz\" (UID: \"030c3571-afbe-48b3-9a59-890b63527ab7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.597051 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-grdvz\" (UID: \"030c3571-afbe-48b3-9a59-890b63527ab7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.597178 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-grdvz\" (UID: \"030c3571-afbe-48b3-9a59-890b63527ab7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.597211 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-grdvz\" (UID: 
\"030c3571-afbe-48b3-9a59-890b63527ab7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.597236 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gpqv\" (UniqueName: \"kubernetes.io/projected/030c3571-afbe-48b3-9a59-890b63527ab7-kube-api-access-6gpqv\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-grdvz\" (UID: \"030c3571-afbe-48b3-9a59-890b63527ab7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.597284 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-grdvz\" (UID: \"030c3571-afbe-48b3-9a59-890b63527ab7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.597365 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-grdvz\" (UID: \"030c3571-afbe-48b3-9a59-890b63527ab7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.597388 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-grdvz\" (UID: \"030c3571-afbe-48b3-9a59-890b63527ab7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.602823 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-grdvz\" (UID: \"030c3571-afbe-48b3-9a59-890b63527ab7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.602908 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-grdvz\" (UID: \"030c3571-afbe-48b3-9a59-890b63527ab7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.603370 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-grdvz\" (UID: \"030c3571-afbe-48b3-9a59-890b63527ab7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.605167 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-grdvz\" (UID: \"030c3571-afbe-48b3-9a59-890b63527ab7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.605632 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-grdvz\" (UID: \"030c3571-afbe-48b3-9a59-890b63527ab7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.605679 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-grdvz\" (UID: \"030c3571-afbe-48b3-9a59-890b63527ab7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.620203 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gpqv\" (UniqueName: \"kubernetes.io/projected/030c3571-afbe-48b3-9a59-890b63527ab7-kube-api-access-6gpqv\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-grdvz\" (UID: \"030c3571-afbe-48b3-9a59-890b63527ab7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz" Feb 17 18:26:05 crc kubenswrapper[4680]: I0217 18:26:05.694091 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz" Feb 17 18:26:06 crc kubenswrapper[4680]: I0217 18:26:06.424702 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz"] Feb 17 18:26:07 crc kubenswrapper[4680]: I0217 18:26:07.276705 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz" event={"ID":"030c3571-afbe-48b3-9a59-890b63527ab7","Type":"ContainerStarted","Data":"8eb444673a86cd26f977a3f170a4b81e6c8592a58954723a83ced0aac5b8de07"} Feb 17 18:26:07 crc kubenswrapper[4680]: I0217 18:26:07.277011 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz" event={"ID":"030c3571-afbe-48b3-9a59-890b63527ab7","Type":"ContainerStarted","Data":"6e87fc8f13b16da737aa7a3595626d6b541807d92ae24fb677781c5f79f9acaf"} Feb 17 18:26:13 crc kubenswrapper[4680]: I0217 18:26:13.106088 4680 scope.go:117] "RemoveContainer" containerID="05e9321314b8bcfdfcee519e4b39d83405420f17828fd66310ce237cc3c9ab78" Feb 17 18:26:13 crc kubenswrapper[4680]: I0217 18:26:13.327617 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerStarted","Data":"67e68953afa70326e197284902024efbdd6d091acf5e877d90934e6fbf164508"} Feb 17 18:26:13 crc kubenswrapper[4680]: I0217 18:26:13.349744 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz" podStartSLOduration=7.935029929 podStartE2EDuration="8.34972467s" podCreationTimestamp="2026-02-17 18:26:05 +0000 UTC" 
firstStartedPulling="2026-02-17 18:26:06.437445013 +0000 UTC m=+2495.988343692" lastFinishedPulling="2026-02-17 18:26:06.852139754 +0000 UTC m=+2496.403038433" observedRunningTime="2026-02-17 18:26:07.300708528 +0000 UTC m=+2496.851607207" watchObservedRunningTime="2026-02-17 18:26:13.34972467 +0000 UTC m=+2502.900623349" Feb 17 18:28:35 crc kubenswrapper[4680]: I0217 18:28:35.589250 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:28:35 crc kubenswrapper[4680]: I0217 18:28:35.589842 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:29:05 crc kubenswrapper[4680]: I0217 18:29:05.589534 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:29:05 crc kubenswrapper[4680]: I0217 18:29:05.593963 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:29:06 crc kubenswrapper[4680]: I0217 18:29:06.699104 4680 generic.go:334] "Generic (PLEG): container finished" podID="030c3571-afbe-48b3-9a59-890b63527ab7" containerID="8eb444673a86cd26f977a3f170a4b81e6c8592a58954723a83ced0aac5b8de07" exitCode=0 Feb 17 18:29:06 crc kubenswrapper[4680]: I0217 18:29:06.699191 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz" event={"ID":"030c3571-afbe-48b3-9a59-890b63527ab7","Type":"ContainerDied","Data":"8eb444673a86cd26f977a3f170a4b81e6c8592a58954723a83ced0aac5b8de07"} Feb 17 18:29:08 crc kubenswrapper[4680]: I0217 18:29:08.070924 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz" Feb 17 18:29:08 crc kubenswrapper[4680]: I0217 18:29:08.217146 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-telemetry-combined-ca-bundle\") pod \"030c3571-afbe-48b3-9a59-890b63527ab7\" (UID: \"030c3571-afbe-48b3-9a59-890b63527ab7\") " Feb 17 18:29:08 crc kubenswrapper[4680]: I0217 18:29:08.217260 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-ssh-key-openstack-edpm-ipam\") pod \"030c3571-afbe-48b3-9a59-890b63527ab7\" (UID: \"030c3571-afbe-48b3-9a59-890b63527ab7\") " Feb 17 18:29:08 crc kubenswrapper[4680]: I0217 18:29:08.217289 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-ceilometer-compute-config-data-2\") pod \"030c3571-afbe-48b3-9a59-890b63527ab7\" (UID: \"030c3571-afbe-48b3-9a59-890b63527ab7\") " Feb 17 18:29:08 crc kubenswrapper[4680]: I0217 18:29:08.217392 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-ceilometer-compute-config-data-1\") pod \"030c3571-afbe-48b3-9a59-890b63527ab7\" (UID: \"030c3571-afbe-48b3-9a59-890b63527ab7\") " Feb 17 18:29:08 crc kubenswrapper[4680]: I0217 18:29:08.217519 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-inventory\") pod \"030c3571-afbe-48b3-9a59-890b63527ab7\" (UID: \"030c3571-afbe-48b3-9a59-890b63527ab7\") " Feb 17 18:29:08 crc kubenswrapper[4680]: I0217 18:29:08.217559 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-ceilometer-compute-config-data-0\") pod \"030c3571-afbe-48b3-9a59-890b63527ab7\" (UID: \"030c3571-afbe-48b3-9a59-890b63527ab7\") " Feb 17 18:29:08 crc kubenswrapper[4680]: I0217 18:29:08.217606 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gpqv\" (UniqueName: \"kubernetes.io/projected/030c3571-afbe-48b3-9a59-890b63527ab7-kube-api-access-6gpqv\") pod \"030c3571-afbe-48b3-9a59-890b63527ab7\" (UID: \"030c3571-afbe-48b3-9a59-890b63527ab7\") " Feb 17 18:29:08 crc kubenswrapper[4680]: I0217 18:29:08.251675 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "030c3571-afbe-48b3-9a59-890b63527ab7" (UID: "030c3571-afbe-48b3-9a59-890b63527ab7"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:29:08 crc kubenswrapper[4680]: I0217 18:29:08.257803 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/030c3571-afbe-48b3-9a59-890b63527ab7-kube-api-access-6gpqv" (OuterVolumeSpecName: "kube-api-access-6gpqv") pod "030c3571-afbe-48b3-9a59-890b63527ab7" (UID: "030c3571-afbe-48b3-9a59-890b63527ab7"). InnerVolumeSpecName "kube-api-access-6gpqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:29:08 crc kubenswrapper[4680]: I0217 18:29:08.291437 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-inventory" (OuterVolumeSpecName: "inventory") pod "030c3571-afbe-48b3-9a59-890b63527ab7" (UID: "030c3571-afbe-48b3-9a59-890b63527ab7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:29:08 crc kubenswrapper[4680]: I0217 18:29:08.291479 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "030c3571-afbe-48b3-9a59-890b63527ab7" (UID: "030c3571-afbe-48b3-9a59-890b63527ab7"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:29:08 crc kubenswrapper[4680]: I0217 18:29:08.321955 4680 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 18:29:08 crc kubenswrapper[4680]: I0217 18:29:08.321982 4680 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 17 18:29:08 crc kubenswrapper[4680]: I0217 18:29:08.321993 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gpqv\" (UniqueName: \"kubernetes.io/projected/030c3571-afbe-48b3-9a59-890b63527ab7-kube-api-access-6gpqv\") on node \"crc\" DevicePath \"\"" Feb 17 18:29:08 crc kubenswrapper[4680]: I0217 18:29:08.322003 4680 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 18:29:08 crc kubenswrapper[4680]: I0217 18:29:08.322248 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "030c3571-afbe-48b3-9a59-890b63527ab7" (UID: "030c3571-afbe-48b3-9a59-890b63527ab7"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:29:08 crc kubenswrapper[4680]: I0217 18:29:08.329573 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "030c3571-afbe-48b3-9a59-890b63527ab7" (UID: "030c3571-afbe-48b3-9a59-890b63527ab7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:29:08 crc kubenswrapper[4680]: I0217 18:29:08.350869 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "030c3571-afbe-48b3-9a59-890b63527ab7" (UID: "030c3571-afbe-48b3-9a59-890b63527ab7"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:29:08 crc kubenswrapper[4680]: I0217 18:29:08.423977 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 18:29:08 crc kubenswrapper[4680]: I0217 18:29:08.424019 4680 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 17 18:29:08 crc kubenswrapper[4680]: I0217 18:29:08.424032 4680 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/030c3571-afbe-48b3-9a59-890b63527ab7-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 17 18:29:08 crc kubenswrapper[4680]: I0217 18:29:08.716999 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz" event={"ID":"030c3571-afbe-48b3-9a59-890b63527ab7","Type":"ContainerDied","Data":"6e87fc8f13b16da737aa7a3595626d6b541807d92ae24fb677781c5f79f9acaf"} Feb 17 18:29:08 crc kubenswrapper[4680]: I0217 18:29:08.717037 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e87fc8f13b16da737aa7a3595626d6b541807d92ae24fb677781c5f79f9acaf" Feb 17 18:29:08 crc kubenswrapper[4680]: I0217 18:29:08.717113 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-grdvz" Feb 17 18:29:35 crc kubenswrapper[4680]: I0217 18:29:35.588607 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:29:35 crc kubenswrapper[4680]: I0217 18:29:35.589161 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:29:35 crc kubenswrapper[4680]: I0217 18:29:35.589208 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" Feb 17 18:29:35 crc kubenswrapper[4680]: I0217 18:29:35.589926 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"67e68953afa70326e197284902024efbdd6d091acf5e877d90934e6fbf164508"} pod="openshift-machine-config-operator/machine-config-daemon-rfw56" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 18:29:35 crc kubenswrapper[4680]: I0217 18:29:35.589991 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" containerID="cri-o://67e68953afa70326e197284902024efbdd6d091acf5e877d90934e6fbf164508" gracePeriod=600 Feb 17 18:29:35 crc kubenswrapper[4680]: I0217 18:29:35.932831 4680 generic.go:334] "Generic (PLEG): container finished" podID="8dd08058-016e-4607-8be6-40463674c2fe" containerID="67e68953afa70326e197284902024efbdd6d091acf5e877d90934e6fbf164508" exitCode=0 Feb 17 18:29:35 crc kubenswrapper[4680]: I0217 18:29:35.932900 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerDied","Data":"67e68953afa70326e197284902024efbdd6d091acf5e877d90934e6fbf164508"} Feb 17 18:29:35 crc kubenswrapper[4680]: I0217 18:29:35.933148 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerStarted","Data":"fb46451ff71571aa4f78ba430701c1a2a0d90a6e4e3c601e4b6131f99f241262"} Feb 17 18:29:35 crc kubenswrapper[4680]: I0217 18:29:35.933178 4680 scope.go:117] "RemoveContainer" containerID="05e9321314b8bcfdfcee519e4b39d83405420f17828fd66310ce237cc3c9ab78" Feb 17 18:29:49 crc kubenswrapper[4680]: I0217 18:29:49.054997 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6ptrj"] Feb 17 18:29:49 crc kubenswrapper[4680]: E0217 18:29:49.055800 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="030c3571-afbe-48b3-9a59-890b63527ab7" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 17 18:29:49 crc kubenswrapper[4680]: I0217 18:29:49.055814 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="030c3571-afbe-48b3-9a59-890b63527ab7" 
containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 17 18:29:49 crc kubenswrapper[4680]: I0217 18:29:49.055979 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="030c3571-afbe-48b3-9a59-890b63527ab7" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 17 18:29:49 crc kubenswrapper[4680]: I0217 18:29:49.057248 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6ptrj" Feb 17 18:29:49 crc kubenswrapper[4680]: I0217 18:29:49.078725 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6ptrj"] Feb 17 18:29:49 crc kubenswrapper[4680]: I0217 18:29:49.095158 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fb451da-2120-45f1-91d9-8f9a0125dc3e-catalog-content\") pod \"redhat-marketplace-6ptrj\" (UID: \"7fb451da-2120-45f1-91d9-8f9a0125dc3e\") " pod="openshift-marketplace/redhat-marketplace-6ptrj" Feb 17 18:29:49 crc kubenswrapper[4680]: I0217 18:29:49.095272 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fb451da-2120-45f1-91d9-8f9a0125dc3e-utilities\") pod \"redhat-marketplace-6ptrj\" (UID: \"7fb451da-2120-45f1-91d9-8f9a0125dc3e\") " pod="openshift-marketplace/redhat-marketplace-6ptrj" Feb 17 18:29:49 crc kubenswrapper[4680]: I0217 18:29:49.095304 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26czw\" (UniqueName: \"kubernetes.io/projected/7fb451da-2120-45f1-91d9-8f9a0125dc3e-kube-api-access-26czw\") pod \"redhat-marketplace-6ptrj\" (UID: \"7fb451da-2120-45f1-91d9-8f9a0125dc3e\") " pod="openshift-marketplace/redhat-marketplace-6ptrj" Feb 17 18:29:49 crc kubenswrapper[4680]: I0217 18:29:49.196691 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fb451da-2120-45f1-91d9-8f9a0125dc3e-catalog-content\") pod \"redhat-marketplace-6ptrj\" (UID: \"7fb451da-2120-45f1-91d9-8f9a0125dc3e\") " pod="openshift-marketplace/redhat-marketplace-6ptrj" Feb 17 18:29:49 crc kubenswrapper[4680]: I0217 18:29:49.197142 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fb451da-2120-45f1-91d9-8f9a0125dc3e-utilities\") pod \"redhat-marketplace-6ptrj\" (UID: \"7fb451da-2120-45f1-91d9-8f9a0125dc3e\") " pod="openshift-marketplace/redhat-marketplace-6ptrj" Feb 17 18:29:49 crc kubenswrapper[4680]: I0217 18:29:49.197189 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26czw\" (UniqueName: \"kubernetes.io/projected/7fb451da-2120-45f1-91d9-8f9a0125dc3e-kube-api-access-26czw\") pod \"redhat-marketplace-6ptrj\" (UID: \"7fb451da-2120-45f1-91d9-8f9a0125dc3e\") " pod="openshift-marketplace/redhat-marketplace-6ptrj" Feb 17 18:29:49 crc kubenswrapper[4680]: I0217 18:29:49.197187 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fb451da-2120-45f1-91d9-8f9a0125dc3e-catalog-content\") pod \"redhat-marketplace-6ptrj\" (UID: \"7fb451da-2120-45f1-91d9-8f9a0125dc3e\") " pod="openshift-marketplace/redhat-marketplace-6ptrj" Feb 17 18:29:49 crc kubenswrapper[4680]: I0217 18:29:49.198486 4680 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fb451da-2120-45f1-91d9-8f9a0125dc3e-utilities\") pod \"redhat-marketplace-6ptrj\" (UID: \"7fb451da-2120-45f1-91d9-8f9a0125dc3e\") " pod="openshift-marketplace/redhat-marketplace-6ptrj" Feb 17 18:29:49 crc kubenswrapper[4680]: I0217 18:29:49.218201 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26czw\" (UniqueName: \"kubernetes.io/projected/7fb451da-2120-45f1-91d9-8f9a0125dc3e-kube-api-access-26czw\") pod \"redhat-marketplace-6ptrj\" (UID: \"7fb451da-2120-45f1-91d9-8f9a0125dc3e\") " pod="openshift-marketplace/redhat-marketplace-6ptrj" Feb 17 18:29:49 crc kubenswrapper[4680]: I0217 18:29:49.375038 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6ptrj" Feb 17 18:29:49 crc kubenswrapper[4680]: I0217 18:29:49.930124 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6ptrj"] Feb 17 18:29:50 crc kubenswrapper[4680]: I0217 18:29:50.342398 4680 generic.go:334] "Generic (PLEG): container finished" podID="7fb451da-2120-45f1-91d9-8f9a0125dc3e" containerID="92329fe961490d39482f8f31e8ac59737a637e7a63374feff9f44311292d5c70" exitCode=0 Feb 17 18:29:50 crc kubenswrapper[4680]: I0217 18:29:50.342554 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6ptrj" event={"ID":"7fb451da-2120-45f1-91d9-8f9a0125dc3e","Type":"ContainerDied","Data":"92329fe961490d39482f8f31e8ac59737a637e7a63374feff9f44311292d5c70"} Feb 17 18:29:50 crc kubenswrapper[4680]: I0217 18:29:50.342690 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6ptrj" event={"ID":"7fb451da-2120-45f1-91d9-8f9a0125dc3e","Type":"ContainerStarted","Data":"7e1228cbd1fabb93d7e4c1a2dc3735e231a9bed4df37df72fdf694bc22c2f076"} Feb 17 18:29:50 crc kubenswrapper[4680]: I0217 18:29:50.344996 4680 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 18:29:52 crc kubenswrapper[4680]: I0217 18:29:52.363767 4680 generic.go:334] "Generic (PLEG): container finished" podID="7fb451da-2120-45f1-91d9-8f9a0125dc3e" containerID="c332f76a30dd09b1abd8f9e5090e8999f9abd595af4ebae16686f93c2542e5a5" exitCode=0 Feb 17 18:29:52 crc kubenswrapper[4680]: I0217 18:29:52.363824 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6ptrj" event={"ID":"7fb451da-2120-45f1-91d9-8f9a0125dc3e","Type":"ContainerDied","Data":"c332f76a30dd09b1abd8f9e5090e8999f9abd595af4ebae16686f93c2542e5a5"} Feb 17 18:29:52 crc kubenswrapper[4680]: I0217 18:29:52.844017 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cqql9"] Feb 17 18:29:52 crc kubenswrapper[4680]: I0217 18:29:52.846656 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cqql9" Feb 17 18:29:52 crc kubenswrapper[4680]: I0217 18:29:52.859005 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cqql9"] Feb 17 18:29:52 crc kubenswrapper[4680]: I0217 18:29:52.898613 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghwdl\" (UniqueName: \"kubernetes.io/projected/b2c93f8c-eba4-48f2-9e5f-19004460d393-kube-api-access-ghwdl\") pod \"community-operators-cqql9\" (UID: \"b2c93f8c-eba4-48f2-9e5f-19004460d393\") " pod="openshift-marketplace/community-operators-cqql9" Feb 17 18:29:52 crc kubenswrapper[4680]: I0217 18:29:52.898707 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2c93f8c-eba4-48f2-9e5f-19004460d393-utilities\") pod \"community-operators-cqql9\" (UID: \"b2c93f8c-eba4-48f2-9e5f-19004460d393\") " pod="openshift-marketplace/community-operators-cqql9" Feb 17 18:29:52 crc kubenswrapper[4680]: I0217 18:29:52.898752 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2c93f8c-eba4-48f2-9e5f-19004460d393-catalog-content\") pod \"community-operators-cqql9\" (UID: \"b2c93f8c-eba4-48f2-9e5f-19004460d393\") " pod="openshift-marketplace/community-operators-cqql9" Feb 17 18:29:53 crc kubenswrapper[4680]: I0217 18:29:53.000637 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghwdl\" (UniqueName: \"kubernetes.io/projected/b2c93f8c-eba4-48f2-9e5f-19004460d393-kube-api-access-ghwdl\") pod \"community-operators-cqql9\" (UID: \"b2c93f8c-eba4-48f2-9e5f-19004460d393\") " pod="openshift-marketplace/community-operators-cqql9" Feb 17 18:29:53 crc kubenswrapper[4680]: I0217 18:29:53.000720 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2c93f8c-eba4-48f2-9e5f-19004460d393-utilities\") pod \"community-operators-cqql9\" (UID: \"b2c93f8c-eba4-48f2-9e5f-19004460d393\") " pod="openshift-marketplace/community-operators-cqql9" Feb 17 18:29:53 crc kubenswrapper[4680]: I0217 18:29:53.000770 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2c93f8c-eba4-48f2-9e5f-19004460d393-catalog-content\") pod \"community-operators-cqql9\" (UID: \"b2c93f8c-eba4-48f2-9e5f-19004460d393\") " pod="openshift-marketplace/community-operators-cqql9" Feb 17 18:29:53 crc kubenswrapper[4680]: I0217 18:29:53.001164 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2c93f8c-eba4-48f2-9e5f-19004460d393-catalog-content\") pod \"community-operators-cqql9\" (UID: \"b2c93f8c-eba4-48f2-9e5f-19004460d393\") " pod="openshift-marketplace/community-operators-cqql9" Feb 17 18:29:53 crc kubenswrapper[4680]: I0217 18:29:53.001499 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2c93f8c-eba4-48f2-9e5f-19004460d393-utilities\") pod \"community-operators-cqql9\" (UID: \"b2c93f8c-eba4-48f2-9e5f-19004460d393\") " pod="openshift-marketplace/community-operators-cqql9" Feb 17 18:29:53 crc kubenswrapper[4680]: I0217 18:29:53.027237 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ghwdl\" (UniqueName: \"kubernetes.io/projected/b2c93f8c-eba4-48f2-9e5f-19004460d393-kube-api-access-ghwdl\") pod \"community-operators-cqql9\" (UID: \"b2c93f8c-eba4-48f2-9e5f-19004460d393\") " pod="openshift-marketplace/community-operators-cqql9" Feb 17 18:29:53 crc kubenswrapper[4680]: I0217 18:29:53.175862 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cqql9" Feb 17 18:29:53 crc kubenswrapper[4680]: I0217 18:29:53.376991 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6ptrj" event={"ID":"7fb451da-2120-45f1-91d9-8f9a0125dc3e","Type":"ContainerStarted","Data":"3945e7acb7bcf7e5c34b79ec69bc23ea21a6c0784ce8d8317bfa2e3f1ab06740"} Feb 17 18:29:53 crc kubenswrapper[4680]: I0217 18:29:53.418041 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6ptrj" podStartSLOduration=1.955319408 podStartE2EDuration="4.418027344s" podCreationTimestamp="2026-02-17 18:29:49 +0000 UTC" firstStartedPulling="2026-02-17 18:29:50.344125566 +0000 UTC m=+2719.895024245" lastFinishedPulling="2026-02-17 18:29:52.806833462 +0000 UTC m=+2722.357732181" observedRunningTime="2026-02-17 18:29:53.40072005 +0000 UTC m=+2722.951618739" watchObservedRunningTime="2026-02-17 18:29:53.418027344 +0000 UTC m=+2722.968926023" Feb 17 18:29:53 crc kubenswrapper[4680]: I0217 18:29:53.866105 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cqql9"] Feb 17 18:29:54 crc kubenswrapper[4680]: I0217 18:29:54.387677 4680 generic.go:334] "Generic (PLEG): container finished" podID="b2c93f8c-eba4-48f2-9e5f-19004460d393" containerID="3bf398600688ba7d3aae1f3ee7c0dfbcf403de8c12bb7c2c413d75b7eab6e710" exitCode=0 Feb 17 18:29:54 crc kubenswrapper[4680]: I0217 18:29:54.387763 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqql9" event={"ID":"b2c93f8c-eba4-48f2-9e5f-19004460d393","Type":"ContainerDied","Data":"3bf398600688ba7d3aae1f3ee7c0dfbcf403de8c12bb7c2c413d75b7eab6e710"} Feb 17 18:29:54 crc kubenswrapper[4680]: I0217 18:29:54.388054 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqql9" event={"ID":"b2c93f8c-eba4-48f2-9e5f-19004460d393","Type":"ContainerStarted","Data":"082355b947e3a6aefa60f51878afab212ae1b10f69934c75f4134cb8ea47a1b8"} Feb 17 18:29:55 crc kubenswrapper[4680]: I0217 18:29:55.396365 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqql9" event={"ID":"b2c93f8c-eba4-48f2-9e5f-19004460d393","Type":"ContainerStarted","Data":"556c183ada86bcd5806292728e4d464f133a480c765086d749a68417d92ada06"} Feb 17 18:29:56 crc kubenswrapper[4680]: I0217 18:29:56.410846 4680 generic.go:334] "Generic (PLEG): container finished" podID="b2c93f8c-eba4-48f2-9e5f-19004460d393" containerID="556c183ada86bcd5806292728e4d464f133a480c765086d749a68417d92ada06" exitCode=0 Feb 17 18:29:56 crc kubenswrapper[4680]: I0217 18:29:56.411075 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqql9" event={"ID":"b2c93f8c-eba4-48f2-9e5f-19004460d393","Type":"ContainerDied","Data":"556c183ada86bcd5806292728e4d464f133a480c765086d749a68417d92ada06"} Feb 17 18:29:57 crc kubenswrapper[4680]: I0217 18:29:57.421582 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-cqql9" event={"ID":"b2c93f8c-eba4-48f2-9e5f-19004460d393","Type":"ContainerStarted","Data":"18105ca0b308bd60323b8dde5f180692ba6d3b625be080fe213640e715eaf242"} Feb 17 18:29:57 crc kubenswrapper[4680]: I0217 18:29:57.448942 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cqql9" podStartSLOduration=2.97553924 podStartE2EDuration="5.448927382s" podCreationTimestamp="2026-02-17 18:29:52 +0000 UTC" firstStartedPulling="2026-02-17 18:29:54.389279792 +0000 UTC m=+2723.940178511" lastFinishedPulling="2026-02-17 18:29:56.862667964 +0000 UTC m=+2726.413566653" observedRunningTime="2026-02-17 18:29:57.443266324 +0000 UTC m=+2726.994165013" watchObservedRunningTime="2026-02-17 18:29:57.448927382 +0000 UTC m=+2726.999826061" Feb 17 18:29:59 crc kubenswrapper[4680]: I0217 18:29:59.375666 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6ptrj" Feb 17 18:29:59 crc kubenswrapper[4680]: I0217 18:29:59.375958 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6ptrj" Feb 17 18:29:59 crc kubenswrapper[4680]: I0217 18:29:59.416799 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6ptrj" Feb 17 18:29:59 crc kubenswrapper[4680]: I0217 18:29:59.514503 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6ptrj" Feb 17 18:30:00 crc kubenswrapper[4680]: I0217 18:30:00.146610 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522550-xxwbr"] Feb 17 18:30:00 crc kubenswrapper[4680]: I0217 18:30:00.148136 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522550-xxwbr" Feb 17 18:30:00 crc kubenswrapper[4680]: I0217 18:30:00.150891 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 18:30:00 crc kubenswrapper[4680]: I0217 18:30:00.151503 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 18:30:00 crc kubenswrapper[4680]: I0217 18:30:00.161940 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522550-xxwbr"] Feb 17 18:30:00 crc kubenswrapper[4680]: I0217 18:30:00.247985 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96sf4\" (UniqueName: \"kubernetes.io/projected/b06e1201-c49f-4cd4-a794-5beecc1223af-kube-api-access-96sf4\") pod \"collect-profiles-29522550-xxwbr\" (UID: \"b06e1201-c49f-4cd4-a794-5beecc1223af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522550-xxwbr" Feb 17 18:30:00 crc kubenswrapper[4680]: I0217 18:30:00.248672 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b06e1201-c49f-4cd4-a794-5beecc1223af-secret-volume\") pod \"collect-profiles-29522550-xxwbr\" (UID: \"b06e1201-c49f-4cd4-a794-5beecc1223af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522550-xxwbr" Feb 17 18:30:00 crc kubenswrapper[4680]: I0217 18:30:00.248831 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b06e1201-c49f-4cd4-a794-5beecc1223af-config-volume\") pod \"collect-profiles-29522550-xxwbr\" (UID: \"b06e1201-c49f-4cd4-a794-5beecc1223af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522550-xxwbr" Feb 17 18:30:00 crc kubenswrapper[4680]: I0217 18:30:00.350765 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b06e1201-c49f-4cd4-a794-5beecc1223af-secret-volume\") pod \"collect-profiles-29522550-xxwbr\" (UID: \"b06e1201-c49f-4cd4-a794-5beecc1223af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522550-xxwbr" Feb 17 18:30:00 crc kubenswrapper[4680]: I0217 18:30:00.350839 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b06e1201-c49f-4cd4-a794-5beecc1223af-config-volume\") pod \"collect-profiles-29522550-xxwbr\" (UID: \"b06e1201-c49f-4cd4-a794-5beecc1223af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522550-xxwbr" Feb 17 18:30:00 crc kubenswrapper[4680]: I0217 18:30:00.350968 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96sf4\" (UniqueName: \"kubernetes.io/projected/b06e1201-c49f-4cd4-a794-5beecc1223af-kube-api-access-96sf4\") pod \"collect-profiles-29522550-xxwbr\" (UID: \"b06e1201-c49f-4cd4-a794-5beecc1223af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522550-xxwbr" Feb 17 18:30:00 crc kubenswrapper[4680]: I0217 18:30:00.352755 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b06e1201-c49f-4cd4-a794-5beecc1223af-config-volume\") pod 
\"collect-profiles-29522550-xxwbr\" (UID: \"b06e1201-c49f-4cd4-a794-5beecc1223af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522550-xxwbr" Feb 17 18:30:00 crc kubenswrapper[4680]: I0217 18:30:00.370094 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96sf4\" (UniqueName: \"kubernetes.io/projected/b06e1201-c49f-4cd4-a794-5beecc1223af-kube-api-access-96sf4\") pod \"collect-profiles-29522550-xxwbr\" (UID: \"b06e1201-c49f-4cd4-a794-5beecc1223af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522550-xxwbr" Feb 17 18:30:00 crc kubenswrapper[4680]: I0217 18:30:00.372225 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b06e1201-c49f-4cd4-a794-5beecc1223af-secret-volume\") pod \"collect-profiles-29522550-xxwbr\" (UID: \"b06e1201-c49f-4cd4-a794-5beecc1223af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522550-xxwbr" Feb 17 18:30:00 crc kubenswrapper[4680]: I0217 18:30:00.472238 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522550-xxwbr" Feb 17 18:30:00 crc kubenswrapper[4680]: I0217 18:30:00.638242 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6ptrj"] Feb 17 18:30:00 crc kubenswrapper[4680]: I0217 18:30:00.908631 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522550-xxwbr"] Feb 17 18:30:00 crc kubenswrapper[4680]: W0217 18:30:00.916824 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb06e1201_c49f_4cd4_a794_5beecc1223af.slice/crio-8e595d51c64edad1d7a5069c8c865ec556964ae8e16753c97b5e1e8458ab0ef9 WatchSource:0}: Error finding container 8e595d51c64edad1d7a5069c8c865ec556964ae8e16753c97b5e1e8458ab0ef9: Status 404 returned error can't find the container with id 8e595d51c64edad1d7a5069c8c865ec556964ae8e16753c97b5e1e8458ab0ef9 Feb 17 18:30:01 crc kubenswrapper[4680]: I0217 18:30:01.480062 4680 generic.go:334] "Generic (PLEG): container finished" podID="b06e1201-c49f-4cd4-a794-5beecc1223af" containerID="06e4219645b0df2bb0b27095ac29410191b0c9c3fa31302c120c680a50b31a13" exitCode=0 Feb 17 18:30:01 crc kubenswrapper[4680]: I0217 18:30:01.480163 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522550-xxwbr" event={"ID":"b06e1201-c49f-4cd4-a794-5beecc1223af","Type":"ContainerDied","Data":"06e4219645b0df2bb0b27095ac29410191b0c9c3fa31302c120c680a50b31a13"} Feb 17 18:30:01 crc kubenswrapper[4680]: I0217 18:30:01.480424 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522550-xxwbr" event={"ID":"b06e1201-c49f-4cd4-a794-5beecc1223af","Type":"ContainerStarted","Data":"8e595d51c64edad1d7a5069c8c865ec556964ae8e16753c97b5e1e8458ab0ef9"} Feb 17 18:30:01 crc kubenswrapper[4680]: I0217 18:30:01.480554 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6ptrj" podUID="7fb451da-2120-45f1-91d9-8f9a0125dc3e" containerName="registry-server" containerID="cri-o://3945e7acb7bcf7e5c34b79ec69bc23ea21a6c0784ce8d8317bfa2e3f1ab06740" gracePeriod=2 Feb 17 18:30:01 crc kubenswrapper[4680]: I0217 18:30:01.980759 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6ptrj" Feb 17 18:30:02 crc kubenswrapper[4680]: I0217 18:30:02.105186 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fb451da-2120-45f1-91d9-8f9a0125dc3e-utilities\") pod \"7fb451da-2120-45f1-91d9-8f9a0125dc3e\" (UID: \"7fb451da-2120-45f1-91d9-8f9a0125dc3e\") " Feb 17 18:30:02 crc kubenswrapper[4680]: I0217 18:30:02.105592 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fb451da-2120-45f1-91d9-8f9a0125dc3e-catalog-content\") pod \"7fb451da-2120-45f1-91d9-8f9a0125dc3e\" (UID: \"7fb451da-2120-45f1-91d9-8f9a0125dc3e\") " Feb 17 18:30:02 crc kubenswrapper[4680]: I0217 18:30:02.105799 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26czw\" (UniqueName: \"kubernetes.io/projected/7fb451da-2120-45f1-91d9-8f9a0125dc3e-kube-api-access-26czw\") pod \"7fb451da-2120-45f1-91d9-8f9a0125dc3e\" (UID: \"7fb451da-2120-45f1-91d9-8f9a0125dc3e\") " Feb 17 18:30:02 crc kubenswrapper[4680]: I0217 18:30:02.105839 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fb451da-2120-45f1-91d9-8f9a0125dc3e-utilities" (OuterVolumeSpecName: "utilities") pod "7fb451da-2120-45f1-91d9-8f9a0125dc3e" (UID: "7fb451da-2120-45f1-91d9-8f9a0125dc3e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:30:02 crc kubenswrapper[4680]: I0217 18:30:02.106240 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fb451da-2120-45f1-91d9-8f9a0125dc3e-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 18:30:02 crc kubenswrapper[4680]: I0217 18:30:02.113575 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fb451da-2120-45f1-91d9-8f9a0125dc3e-kube-api-access-26czw" (OuterVolumeSpecName: "kube-api-access-26czw") pod "7fb451da-2120-45f1-91d9-8f9a0125dc3e" (UID: "7fb451da-2120-45f1-91d9-8f9a0125dc3e"). InnerVolumeSpecName "kube-api-access-26czw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:30:02 crc kubenswrapper[4680]: I0217 18:30:02.128940 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fb451da-2120-45f1-91d9-8f9a0125dc3e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7fb451da-2120-45f1-91d9-8f9a0125dc3e" (UID: "7fb451da-2120-45f1-91d9-8f9a0125dc3e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:30:02 crc kubenswrapper[4680]: I0217 18:30:02.207680 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fb451da-2120-45f1-91d9-8f9a0125dc3e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 18:30:02 crc kubenswrapper[4680]: I0217 18:30:02.207714 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26czw\" (UniqueName: \"kubernetes.io/projected/7fb451da-2120-45f1-91d9-8f9a0125dc3e-kube-api-access-26czw\") on node \"crc\" DevicePath \"\"" Feb 17 18:30:02 crc kubenswrapper[4680]: I0217 18:30:02.489524 4680 generic.go:334] "Generic (PLEG): container finished" podID="7fb451da-2120-45f1-91d9-8f9a0125dc3e" containerID="3945e7acb7bcf7e5c34b79ec69bc23ea21a6c0784ce8d8317bfa2e3f1ab06740" exitCode=0 Feb 17 18:30:02 crc kubenswrapper[4680]: I0217 18:30:02.489575 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6ptrj" Feb 17 18:30:02 crc kubenswrapper[4680]: I0217 18:30:02.489597 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6ptrj" event={"ID":"7fb451da-2120-45f1-91d9-8f9a0125dc3e","Type":"ContainerDied","Data":"3945e7acb7bcf7e5c34b79ec69bc23ea21a6c0784ce8d8317bfa2e3f1ab06740"} Feb 17 18:30:02 crc kubenswrapper[4680]: I0217 18:30:02.489643 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6ptrj" event={"ID":"7fb451da-2120-45f1-91d9-8f9a0125dc3e","Type":"ContainerDied","Data":"7e1228cbd1fabb93d7e4c1a2dc3735e231a9bed4df37df72fdf694bc22c2f076"} Feb 17 18:30:02 crc kubenswrapper[4680]: I0217 18:30:02.489662 4680 scope.go:117] "RemoveContainer" containerID="3945e7acb7bcf7e5c34b79ec69bc23ea21a6c0784ce8d8317bfa2e3f1ab06740" Feb 17 18:30:02 crc kubenswrapper[4680]: I0217 18:30:02.510729 4680 scope.go:117] "RemoveContainer" containerID="c332f76a30dd09b1abd8f9e5090e8999f9abd595af4ebae16686f93c2542e5a5" Feb 17 18:30:02 crc kubenswrapper[4680]: I0217 18:30:02.530097 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6ptrj"] Feb 17 18:30:02 crc kubenswrapper[4680]: I0217 18:30:02.537302 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6ptrj"] Feb 17 18:30:02 crc kubenswrapper[4680]: I0217 18:30:02.560224 4680 scope.go:117] "RemoveContainer" containerID="92329fe961490d39482f8f31e8ac59737a637e7a63374feff9f44311292d5c70" Feb 17 18:30:02 crc kubenswrapper[4680]: I0217 18:30:02.601124 4680 scope.go:117] "RemoveContainer" containerID="3945e7acb7bcf7e5c34b79ec69bc23ea21a6c0784ce8d8317bfa2e3f1ab06740" Feb 17 18:30:02 crc kubenswrapper[4680]: E0217 18:30:02.602781 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3945e7acb7bcf7e5c34b79ec69bc23ea21a6c0784ce8d8317bfa2e3f1ab06740\": container with ID starting with 3945e7acb7bcf7e5c34b79ec69bc23ea21a6c0784ce8d8317bfa2e3f1ab06740 not found: ID does not exist" containerID="3945e7acb7bcf7e5c34b79ec69bc23ea21a6c0784ce8d8317bfa2e3f1ab06740" Feb 17 18:30:02 crc kubenswrapper[4680]: I0217 18:30:02.602828 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3945e7acb7bcf7e5c34b79ec69bc23ea21a6c0784ce8d8317bfa2e3f1ab06740"} err="failed to get container status 
\"3945e7acb7bcf7e5c34b79ec69bc23ea21a6c0784ce8d8317bfa2e3f1ab06740\": rpc error: code = NotFound desc = could not find container \"3945e7acb7bcf7e5c34b79ec69bc23ea21a6c0784ce8d8317bfa2e3f1ab06740\": container with ID starting with 3945e7acb7bcf7e5c34b79ec69bc23ea21a6c0784ce8d8317bfa2e3f1ab06740 not found: ID does not exist" Feb 17 18:30:02 crc kubenswrapper[4680]: I0217 18:30:02.602850 4680 scope.go:117] "RemoveContainer" containerID="c332f76a30dd09b1abd8f9e5090e8999f9abd595af4ebae16686f93c2542e5a5" Feb 17 18:30:02 crc kubenswrapper[4680]: E0217 18:30:02.603527 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c332f76a30dd09b1abd8f9e5090e8999f9abd595af4ebae16686f93c2542e5a5\": container with ID starting with c332f76a30dd09b1abd8f9e5090e8999f9abd595af4ebae16686f93c2542e5a5 not found: ID does not exist" containerID="c332f76a30dd09b1abd8f9e5090e8999f9abd595af4ebae16686f93c2542e5a5" Feb 17 18:30:02 crc kubenswrapper[4680]: I0217 18:30:02.603551 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c332f76a30dd09b1abd8f9e5090e8999f9abd595af4ebae16686f93c2542e5a5"} err="failed to get container status \"c332f76a30dd09b1abd8f9e5090e8999f9abd595af4ebae16686f93c2542e5a5\": rpc error: code = NotFound desc = could not find container \"c332f76a30dd09b1abd8f9e5090e8999f9abd595af4ebae16686f93c2542e5a5\": container with ID starting with c332f76a30dd09b1abd8f9e5090e8999f9abd595af4ebae16686f93c2542e5a5 not found: ID does not exist" Feb 17 18:30:02 crc kubenswrapper[4680]: I0217 18:30:02.603567 4680 scope.go:117] "RemoveContainer" containerID="92329fe961490d39482f8f31e8ac59737a637e7a63374feff9f44311292d5c70" Feb 17 18:30:02 crc kubenswrapper[4680]: E0217 18:30:02.603791 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92329fe961490d39482f8f31e8ac59737a637e7a63374feff9f44311292d5c70\": container with ID starting with 92329fe961490d39482f8f31e8ac59737a637e7a63374feff9f44311292d5c70 not found: ID does not exist" containerID="92329fe961490d39482f8f31e8ac59737a637e7a63374feff9f44311292d5c70" Feb 17 18:30:02 crc kubenswrapper[4680]: I0217 18:30:02.603815 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92329fe961490d39482f8f31e8ac59737a637e7a63374feff9f44311292d5c70"} err="failed to get container status \"92329fe961490d39482f8f31e8ac59737a637e7a63374feff9f44311292d5c70\": rpc error: code = NotFound desc = could not find container \"92329fe961490d39482f8f31e8ac59737a637e7a63374feff9f44311292d5c70\": container with ID starting with 92329fe961490d39482f8f31e8ac59737a637e7a63374feff9f44311292d5c70 not found: ID does not exist" Feb 17 18:30:02 crc kubenswrapper[4680]: I0217 18:30:02.844706 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522550-xxwbr" Feb 17 18:30:03 crc kubenswrapper[4680]: I0217 18:30:03.022814 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96sf4\" (UniqueName: \"kubernetes.io/projected/b06e1201-c49f-4cd4-a794-5beecc1223af-kube-api-access-96sf4\") pod \"b06e1201-c49f-4cd4-a794-5beecc1223af\" (UID: \"b06e1201-c49f-4cd4-a794-5beecc1223af\") " Feb 17 18:30:03 crc kubenswrapper[4680]: I0217 18:30:03.022920 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b06e1201-c49f-4cd4-a794-5beecc1223af-config-volume\") pod \"b06e1201-c49f-4cd4-a794-5beecc1223af\" (UID: \"b06e1201-c49f-4cd4-a794-5beecc1223af\") " Feb 17 18:30:03 crc kubenswrapper[4680]: I0217 18:30:03.023051 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b06e1201-c49f-4cd4-a794-5beecc1223af-secret-volume\") pod \"b06e1201-c49f-4cd4-a794-5beecc1223af\" (UID: \"b06e1201-c49f-4cd4-a794-5beecc1223af\") " Feb 17 18:30:03 crc kubenswrapper[4680]: I0217 18:30:03.023616 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b06e1201-c49f-4cd4-a794-5beecc1223af-config-volume" (OuterVolumeSpecName: "config-volume") pod "b06e1201-c49f-4cd4-a794-5beecc1223af" (UID: "b06e1201-c49f-4cd4-a794-5beecc1223af"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:30:03 crc kubenswrapper[4680]: I0217 18:30:03.027668 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b06e1201-c49f-4cd4-a794-5beecc1223af-kube-api-access-96sf4" (OuterVolumeSpecName: "kube-api-access-96sf4") pod "b06e1201-c49f-4cd4-a794-5beecc1223af" (UID: "b06e1201-c49f-4cd4-a794-5beecc1223af"). InnerVolumeSpecName "kube-api-access-96sf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:30:03 crc kubenswrapper[4680]: I0217 18:30:03.028519 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b06e1201-c49f-4cd4-a794-5beecc1223af-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b06e1201-c49f-4cd4-a794-5beecc1223af" (UID: "b06e1201-c49f-4cd4-a794-5beecc1223af"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:30:03 crc kubenswrapper[4680]: I0217 18:30:03.116576 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fb451da-2120-45f1-91d9-8f9a0125dc3e" path="/var/lib/kubelet/pods/7fb451da-2120-45f1-91d9-8f9a0125dc3e/volumes" Feb 17 18:30:03 crc kubenswrapper[4680]: I0217 18:30:03.125732 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96sf4\" (UniqueName: \"kubernetes.io/projected/b06e1201-c49f-4cd4-a794-5beecc1223af-kube-api-access-96sf4\") on node \"crc\" DevicePath \"\"" Feb 17 18:30:03 crc kubenswrapper[4680]: I0217 18:30:03.125772 4680 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b06e1201-c49f-4cd4-a794-5beecc1223af-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 18:30:03 crc kubenswrapper[4680]: I0217 18:30:03.125787 4680 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b06e1201-c49f-4cd4-a794-5beecc1223af-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 18:30:03 crc kubenswrapper[4680]: I0217 18:30:03.176690 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cqql9" Feb 17 18:30:03 crc kubenswrapper[4680]: I0217 18:30:03.176740 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cqql9" Feb 17 18:30:03 crc kubenswrapper[4680]: I0217 18:30:03.220708 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cqql9" Feb 17 18:30:03 crc kubenswrapper[4680]: I0217 18:30:03.499765 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522550-xxwbr" event={"ID":"b06e1201-c49f-4cd4-a794-5beecc1223af","Type":"ContainerDied","Data":"8e595d51c64edad1d7a5069c8c865ec556964ae8e16753c97b5e1e8458ab0ef9"} Feb 17 18:30:03 crc kubenswrapper[4680]: I0217 18:30:03.499807 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e595d51c64edad1d7a5069c8c865ec556964ae8e16753c97b5e1e8458ab0ef9" Feb 17 18:30:03 crc kubenswrapper[4680]: I0217 18:30:03.500534 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522550-xxwbr" Feb 17 18:30:03 crc kubenswrapper[4680]: I0217 18:30:03.545542 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cqql9" Feb 17 18:30:03 crc kubenswrapper[4680]: I0217 18:30:03.913450 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522505-8tbt2"] Feb 17 18:30:03 crc kubenswrapper[4680]: I0217 18:30:03.920931 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522505-8tbt2"] Feb 17 18:30:05 crc kubenswrapper[4680]: I0217 18:30:05.054251 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cqql9"] Feb 17 18:30:05 crc kubenswrapper[4680]: I0217 18:30:05.123700 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b6a027a-aad9-445b-8bab-ac0004a7c565" path="/var/lib/kubelet/pods/4b6a027a-aad9-445b-8bab-ac0004a7c565/volumes" Feb 17 18:30:05 crc kubenswrapper[4680]: I0217 18:30:05.519363 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cqql9" podUID="b2c93f8c-eba4-48f2-9e5f-19004460d393" containerName="registry-server" containerID="cri-o://18105ca0b308bd60323b8dde5f180692ba6d3b625be080fe213640e715eaf242" gracePeriod=2 Feb 17 18:30:05 crc kubenswrapper[4680]: I0217 18:30:05.975608 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cqql9" Feb 17 18:30:06 crc kubenswrapper[4680]: I0217 18:30:06.080398 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2c93f8c-eba4-48f2-9e5f-19004460d393-catalog-content\") pod \"b2c93f8c-eba4-48f2-9e5f-19004460d393\" (UID: \"b2c93f8c-eba4-48f2-9e5f-19004460d393\") " Feb 17 18:30:06 crc kubenswrapper[4680]: I0217 18:30:06.080528 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2c93f8c-eba4-48f2-9e5f-19004460d393-utilities\") pod \"b2c93f8c-eba4-48f2-9e5f-19004460d393\" (UID: \"b2c93f8c-eba4-48f2-9e5f-19004460d393\") " Feb 17 18:30:06 crc kubenswrapper[4680]: I0217 18:30:06.080733 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghwdl\" (UniqueName: \"kubernetes.io/projected/b2c93f8c-eba4-48f2-9e5f-19004460d393-kube-api-access-ghwdl\") pod \"b2c93f8c-eba4-48f2-9e5f-19004460d393\" (UID: \"b2c93f8c-eba4-48f2-9e5f-19004460d393\") " Feb 17 18:30:06 crc kubenswrapper[4680]: I0217 18:30:06.081551 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2c93f8c-eba4-48f2-9e5f-19004460d393-utilities" (OuterVolumeSpecName: "utilities") pod "b2c93f8c-eba4-48f2-9e5f-19004460d393" (UID: "b2c93f8c-eba4-48f2-9e5f-19004460d393"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:30:06 crc kubenswrapper[4680]: I0217 18:30:06.085960 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2c93f8c-eba4-48f2-9e5f-19004460d393-kube-api-access-ghwdl" (OuterVolumeSpecName: "kube-api-access-ghwdl") pod "b2c93f8c-eba4-48f2-9e5f-19004460d393" (UID: "b2c93f8c-eba4-48f2-9e5f-19004460d393"). 
InnerVolumeSpecName "kube-api-access-ghwdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:30:06 crc kubenswrapper[4680]: I0217 18:30:06.144924 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2c93f8c-eba4-48f2-9e5f-19004460d393-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2c93f8c-eba4-48f2-9e5f-19004460d393" (UID: "b2c93f8c-eba4-48f2-9e5f-19004460d393"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:30:06 crc kubenswrapper[4680]: I0217 18:30:06.183670 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2c93f8c-eba4-48f2-9e5f-19004460d393-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 18:30:06 crc kubenswrapper[4680]: I0217 18:30:06.183715 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2c93f8c-eba4-48f2-9e5f-19004460d393-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 18:30:06 crc kubenswrapper[4680]: I0217 18:30:06.183731 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghwdl\" (UniqueName: \"kubernetes.io/projected/b2c93f8c-eba4-48f2-9e5f-19004460d393-kube-api-access-ghwdl\") on node \"crc\" DevicePath \"\"" Feb 17 18:30:06 crc kubenswrapper[4680]: I0217 18:30:06.531262 4680 generic.go:334] "Generic (PLEG): container finished" podID="b2c93f8c-eba4-48f2-9e5f-19004460d393" containerID="18105ca0b308bd60323b8dde5f180692ba6d3b625be080fe213640e715eaf242" exitCode=0 Feb 17 18:30:06 crc kubenswrapper[4680]: I0217 18:30:06.531350 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqql9" event={"ID":"b2c93f8c-eba4-48f2-9e5f-19004460d393","Type":"ContainerDied","Data":"18105ca0b308bd60323b8dde5f180692ba6d3b625be080fe213640e715eaf242"} Feb 17 18:30:06 crc kubenswrapper[4680]: I0217 18:30:06.531584 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqql9" event={"ID":"b2c93f8c-eba4-48f2-9e5f-19004460d393","Type":"ContainerDied","Data":"082355b947e3a6aefa60f51878afab212ae1b10f69934c75f4134cb8ea47a1b8"} Feb 17 18:30:06 crc kubenswrapper[4680]: I0217 18:30:06.531610 4680 scope.go:117] "RemoveContainer" containerID="18105ca0b308bd60323b8dde5f180692ba6d3b625be080fe213640e715eaf242" Feb 17 18:30:06 crc kubenswrapper[4680]: I0217 18:30:06.531520 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cqql9" Feb 17 18:30:06 crc kubenswrapper[4680]: I0217 18:30:06.548832 4680 scope.go:117] "RemoveContainer" containerID="556c183ada86bcd5806292728e4d464f133a480c765086d749a68417d92ada06" Feb 17 18:30:06 crc kubenswrapper[4680]: I0217 18:30:06.580789 4680 scope.go:117] "RemoveContainer" containerID="3bf398600688ba7d3aae1f3ee7c0dfbcf403de8c12bb7c2c413d75b7eab6e710" Feb 17 18:30:06 crc kubenswrapper[4680]: I0217 18:30:06.587714 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cqql9"] Feb 17 18:30:06 crc kubenswrapper[4680]: I0217 18:30:06.600504 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cqql9"] Feb 17 18:30:06 crc kubenswrapper[4680]: I0217 18:30:06.614271 4680 scope.go:117] "RemoveContainer" containerID="18105ca0b308bd60323b8dde5f180692ba6d3b625be080fe213640e715eaf242" Feb 17 18:30:06 crc kubenswrapper[4680]: E0217 18:30:06.614817 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18105ca0b308bd60323b8dde5f180692ba6d3b625be080fe213640e715eaf242\": container with ID starting with 18105ca0b308bd60323b8dde5f180692ba6d3b625be080fe213640e715eaf242 not found: ID does not exist" containerID="18105ca0b308bd60323b8dde5f180692ba6d3b625be080fe213640e715eaf242" Feb 17 18:30:06 crc kubenswrapper[4680]: I0217 18:30:06.614926 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18105ca0b308bd60323b8dde5f180692ba6d3b625be080fe213640e715eaf242"} err="failed to get container status \"18105ca0b308bd60323b8dde5f180692ba6d3b625be080fe213640e715eaf242\": rpc error: code = NotFound desc = could not find container \"18105ca0b308bd60323b8dde5f180692ba6d3b625be080fe213640e715eaf242\": container with ID starting with 18105ca0b308bd60323b8dde5f180692ba6d3b625be080fe213640e715eaf242 not found: ID does not exist" Feb 17 18:30:06 crc kubenswrapper[4680]: I0217 18:30:06.615022 4680 scope.go:117] "RemoveContainer" containerID="556c183ada86bcd5806292728e4d464f133a480c765086d749a68417d92ada06" Feb 17 18:30:06 crc kubenswrapper[4680]: E0217 18:30:06.615635 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"556c183ada86bcd5806292728e4d464f133a480c765086d749a68417d92ada06\": container with ID starting with 556c183ada86bcd5806292728e4d464f133a480c765086d749a68417d92ada06 not found: ID does not exist" containerID="556c183ada86bcd5806292728e4d464f133a480c765086d749a68417d92ada06" Feb 17 18:30:06 crc kubenswrapper[4680]: I0217 18:30:06.615676 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"556c183ada86bcd5806292728e4d464f133a480c765086d749a68417d92ada06"} err="failed to get container status \"556c183ada86bcd5806292728e4d464f133a480c765086d749a68417d92ada06\": rpc error: code = NotFound desc = could not find container \"556c183ada86bcd5806292728e4d464f133a480c765086d749a68417d92ada06\": container with ID starting with 556c183ada86bcd5806292728e4d464f133a480c765086d749a68417d92ada06 not found: ID does not exist" Feb 17 18:30:06 crc kubenswrapper[4680]: I0217 18:30:06.615702 4680 scope.go:117] "RemoveContainer" containerID="3bf398600688ba7d3aae1f3ee7c0dfbcf403de8c12bb7c2c413d75b7eab6e710" Feb 17 18:30:06 crc kubenswrapper[4680]: E0217 18:30:06.616177 4680 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3bf398600688ba7d3aae1f3ee7c0dfbcf403de8c12bb7c2c413d75b7eab6e710\": container with ID starting with 3bf398600688ba7d3aae1f3ee7c0dfbcf403de8c12bb7c2c413d75b7eab6e710 not found: ID does not exist" containerID="3bf398600688ba7d3aae1f3ee7c0dfbcf403de8c12bb7c2c413d75b7eab6e710" Feb 17 18:30:06 crc kubenswrapper[4680]: I0217 18:30:06.616207 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bf398600688ba7d3aae1f3ee7c0dfbcf403de8c12bb7c2c413d75b7eab6e710"} err="failed to get container status \"3bf398600688ba7d3aae1f3ee7c0dfbcf403de8c12bb7c2c413d75b7eab6e710\": rpc error: code = NotFound desc = could not find container \"3bf398600688ba7d3aae1f3ee7c0dfbcf403de8c12bb7c2c413d75b7eab6e710\": container with ID starting with 3bf398600688ba7d3aae1f3ee7c0dfbcf403de8c12bb7c2c413d75b7eab6e710 not found: ID does not exist" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.124058 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2c93f8c-eba4-48f2-9e5f-19004460d393" path="/var/lib/kubelet/pods/b2c93f8c-eba4-48f2-9e5f-19004460d393/volumes" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.154570 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 17 18:30:07 crc kubenswrapper[4680]: E0217 18:30:07.155029 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fb451da-2120-45f1-91d9-8f9a0125dc3e" containerName="extract-content" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.155043 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fb451da-2120-45f1-91d9-8f9a0125dc3e" containerName="extract-content" Feb 17 18:30:07 crc kubenswrapper[4680]: E0217 18:30:07.155057 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2c93f8c-eba4-48f2-9e5f-19004460d393" containerName="registry-server" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.155066 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2c93f8c-eba4-48f2-9e5f-19004460d393" containerName="registry-server" Feb 17 18:30:07 crc kubenswrapper[4680]: E0217 18:30:07.155081 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2c93f8c-eba4-48f2-9e5f-19004460d393" containerName="extract-utilities" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.155089 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2c93f8c-eba4-48f2-9e5f-19004460d393" containerName="extract-utilities" Feb 17 18:30:07 crc kubenswrapper[4680]: E0217 18:30:07.155108 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b06e1201-c49f-4cd4-a794-5beecc1223af" containerName="collect-profiles" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.155116 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="b06e1201-c49f-4cd4-a794-5beecc1223af" containerName="collect-profiles" Feb 17 18:30:07 crc kubenswrapper[4680]: E0217 18:30:07.155129 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fb451da-2120-45f1-91d9-8f9a0125dc3e" containerName="registry-server" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.155138 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fb451da-2120-45f1-91d9-8f9a0125dc3e" containerName="registry-server" Feb 17 18:30:07 crc kubenswrapper[4680]: E0217 18:30:07.155153 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2c93f8c-eba4-48f2-9e5f-19004460d393" containerName="extract-content" Feb 17 
18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.155162 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2c93f8c-eba4-48f2-9e5f-19004460d393" containerName="extract-content" Feb 17 18:30:07 crc kubenswrapper[4680]: E0217 18:30:07.155189 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fb451da-2120-45f1-91d9-8f9a0125dc3e" containerName="extract-utilities" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.155195 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fb451da-2120-45f1-91d9-8f9a0125dc3e" containerName="extract-utilities" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.155403 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="b06e1201-c49f-4cd4-a794-5beecc1223af" containerName="collect-profiles" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.155421 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fb451da-2120-45f1-91d9-8f9a0125dc3e" containerName="registry-server" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.155435 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2c93f8c-eba4-48f2-9e5f-19004460d393" containerName="registry-server" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.156056 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.159551 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.159891 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.160583 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-vvjj6" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.161015 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.175909 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.304114 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " pod="openstack/tempest-tests-tempest" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.304204 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/327c58a4-13db-4bf2-b6e4-755c23c2a36f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " pod="openstack/tempest-tests-tempest" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.304235 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/327c58a4-13db-4bf2-b6e4-755c23c2a36f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " pod="openstack/tempest-tests-tempest" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.304262 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/327c58a4-13db-4bf2-b6e4-755c23c2a36f-config-data\") pod \"tempest-tests-tempest\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " pod="openstack/tempest-tests-tempest" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.304301 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/327c58a4-13db-4bf2-b6e4-755c23c2a36f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " pod="openstack/tempest-tests-tempest" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.304362 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jplwl\" (UniqueName: \"kubernetes.io/projected/327c58a4-13db-4bf2-b6e4-755c23c2a36f-kube-api-access-jplwl\") pod \"tempest-tests-tempest\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " pod="openstack/tempest-tests-tempest" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.304392 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/327c58a4-13db-4bf2-b6e4-755c23c2a36f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " pod="openstack/tempest-tests-tempest" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.304418 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/327c58a4-13db-4bf2-b6e4-755c23c2a36f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " pod="openstack/tempest-tests-tempest" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.304452 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/327c58a4-13db-4bf2-b6e4-755c23c2a36f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " pod="openstack/tempest-tests-tempest" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.406308 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/327c58a4-13db-4bf2-b6e4-755c23c2a36f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " pod="openstack/tempest-tests-tempest" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.406569 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " pod="openstack/tempest-tests-tempest" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.406655 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/327c58a4-13db-4bf2-b6e4-755c23c2a36f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " pod="openstack/tempest-tests-tempest" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.406691 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/secret/327c58a4-13db-4bf2-b6e4-755c23c2a36f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " pod="openstack/tempest-tests-tempest" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.406725 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/327c58a4-13db-4bf2-b6e4-755c23c2a36f-config-data\") pod \"tempest-tests-tempest\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " pod="openstack/tempest-tests-tempest" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.406788 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/327c58a4-13db-4bf2-b6e4-755c23c2a36f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " pod="openstack/tempest-tests-tempest" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.406853 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jplwl\" (UniqueName: \"kubernetes.io/projected/327c58a4-13db-4bf2-b6e4-755c23c2a36f-kube-api-access-jplwl\") pod \"tempest-tests-tempest\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " pod="openstack/tempest-tests-tempest" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.406895 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/327c58a4-13db-4bf2-b6e4-755c23c2a36f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " pod="openstack/tempest-tests-tempest" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.406933 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/327c58a4-13db-4bf2-b6e4-755c23c2a36f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " pod="openstack/tempest-tests-tempest" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.407067 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/tempest-tests-tempest" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.407925 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/327c58a4-13db-4bf2-b6e4-755c23c2a36f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " pod="openstack/tempest-tests-tempest" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.407954 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/327c58a4-13db-4bf2-b6e4-755c23c2a36f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " pod="openstack/tempest-tests-tempest" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.408287 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/327c58a4-13db-4bf2-b6e4-755c23c2a36f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " pod="openstack/tempest-tests-tempest" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.408787 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/327c58a4-13db-4bf2-b6e4-755c23c2a36f-config-data\") pod \"tempest-tests-tempest\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " pod="openstack/tempest-tests-tempest" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.414528 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/327c58a4-13db-4bf2-b6e4-755c23c2a36f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " pod="openstack/tempest-tests-tempest" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.415434 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/327c58a4-13db-4bf2-b6e4-755c23c2a36f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " pod="openstack/tempest-tests-tempest" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.421394 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/327c58a4-13db-4bf2-b6e4-755c23c2a36f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " pod="openstack/tempest-tests-tempest" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.426092 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jplwl\" (UniqueName: \"kubernetes.io/projected/327c58a4-13db-4bf2-b6e4-755c23c2a36f-kube-api-access-jplwl\") pod \"tempest-tests-tempest\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " pod="openstack/tempest-tests-tempest" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.440073 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " pod="openstack/tempest-tests-tempest" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.479771 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 17 18:30:07 crc kubenswrapper[4680]: I0217 18:30:07.987669 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 17 18:30:08 crc kubenswrapper[4680]: I0217 18:30:08.592596 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"327c58a4-13db-4bf2-b6e4-755c23c2a36f","Type":"ContainerStarted","Data":"4d4c291692acc2369859ae51763708f4c15ce1b79cc25558629ad3f94173f586"} Feb 17 18:30:41 crc kubenswrapper[4680]: E0217 18:30:41.747463 4680 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 17 18:30:41 crc kubenswrapper[4680]: E0217 18:30:41.751404 4680 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jplwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-te
mpest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(327c58a4-13db-4bf2-b6e4-755c23c2a36f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 18:30:41 crc kubenswrapper[4680]: E0217 18:30:41.752742 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="327c58a4-13db-4bf2-b6e4-755c23c2a36f" Feb 17 18:30:41 crc kubenswrapper[4680]: E0217 18:30:41.911505 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="327c58a4-13db-4bf2-b6e4-755c23c2a36f" Feb 17 18:30:55 crc kubenswrapper[4680]: I0217 18:30:55.414558 4680 scope.go:117] "RemoveContainer" containerID="e7dfc9b6c735a791e5e2cc87e2682d01f1809c7bdfbf11c2c9ddfb7fa72e8119" Feb 17 18:30:57 crc kubenswrapper[4680]: I0217 18:30:57.702345 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 17 18:30:59 crc kubenswrapper[4680]: I0217 18:30:59.043535 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"327c58a4-13db-4bf2-b6e4-755c23c2a36f","Type":"ContainerStarted","Data":"0d9da85c7ef970499b8dba2e2e88d3281ae90a76d93ded80046905a1300c3b51"} Feb 17 18:31:00 crc kubenswrapper[4680]: I0217 18:31:00.073837 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.378731088 podStartE2EDuration="54.073817654s" podCreationTimestamp="2026-02-17 18:30:06 +0000 UTC" firstStartedPulling="2026-02-17 18:30:08.00413336 +0000 UTC m=+2737.555032029" lastFinishedPulling="2026-02-17 18:30:57.699219916 +0000 UTC m=+2787.250118595" observedRunningTime="2026-02-17 18:31:00.065870213 +0000 UTC m=+2789.616768912" watchObservedRunningTime="2026-02-17 18:31:00.073817654 +0000 UTC m=+2789.624716343" Feb 17 18:31:35 crc kubenswrapper[4680]: I0217 18:31:35.185805 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q5g48"] Feb 17 18:31:35 crc kubenswrapper[4680]: I0217 18:31:35.188139 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q5g48" Feb 17 18:31:35 crc kubenswrapper[4680]: I0217 18:31:35.251284 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q5g48"] Feb 17 18:31:35 crc kubenswrapper[4680]: I0217 18:31:35.301604 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84f7b3f2-7b45-4503-8f31-4f8ecff6fd03-utilities\") pod \"certified-operators-q5g48\" (UID: \"84f7b3f2-7b45-4503-8f31-4f8ecff6fd03\") " pod="openshift-marketplace/certified-operators-q5g48" Feb 17 18:31:35 crc kubenswrapper[4680]: I0217 18:31:35.304297 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbdsn\" (UniqueName: \"kubernetes.io/projected/84f7b3f2-7b45-4503-8f31-4f8ecff6fd03-kube-api-access-qbdsn\") pod \"certified-operators-q5g48\" (UID: \"84f7b3f2-7b45-4503-8f31-4f8ecff6fd03\") " pod="openshift-marketplace/certified-operators-q5g48" Feb 17 18:31:35 crc kubenswrapper[4680]: I0217 18:31:35.304403 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84f7b3f2-7b45-4503-8f31-4f8ecff6fd03-catalog-content\") pod \"certified-operators-q5g48\" (UID: \"84f7b3f2-7b45-4503-8f31-4f8ecff6fd03\") " pod="openshift-marketplace/certified-operators-q5g48" Feb 17 18:31:35 crc kubenswrapper[4680]: I0217 18:31:35.406106 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbdsn\" (UniqueName: \"kubernetes.io/projected/84f7b3f2-7b45-4503-8f31-4f8ecff6fd03-kube-api-access-qbdsn\") pod \"certified-operators-q5g48\" (UID: \"84f7b3f2-7b45-4503-8f31-4f8ecff6fd03\") " pod="openshift-marketplace/certified-operators-q5g48" Feb 17 18:31:35 crc kubenswrapper[4680]: I0217 18:31:35.406160 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84f7b3f2-7b45-4503-8f31-4f8ecff6fd03-catalog-content\") pod \"certified-operators-q5g48\" (UID: \"84f7b3f2-7b45-4503-8f31-4f8ecff6fd03\") " pod="openshift-marketplace/certified-operators-q5g48" Feb 17 18:31:35 crc kubenswrapper[4680]: I0217 18:31:35.406205 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84f7b3f2-7b45-4503-8f31-4f8ecff6fd03-utilities\") pod \"certified-operators-q5g48\" (UID: \"84f7b3f2-7b45-4503-8f31-4f8ecff6fd03\") " pod="openshift-marketplace/certified-operators-q5g48" Feb 17 18:31:35 crc kubenswrapper[4680]: I0217 18:31:35.406748 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84f7b3f2-7b45-4503-8f31-4f8ecff6fd03-catalog-content\") pod \"certified-operators-q5g48\" (UID: \"84f7b3f2-7b45-4503-8f31-4f8ecff6fd03\") " pod="openshift-marketplace/certified-operators-q5g48" Feb 17 18:31:35 crc kubenswrapper[4680]: I0217 18:31:35.406762 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84f7b3f2-7b45-4503-8f31-4f8ecff6fd03-utilities\") pod \"certified-operators-q5g48\" (UID: \"84f7b3f2-7b45-4503-8f31-4f8ecff6fd03\") " pod="openshift-marketplace/certified-operators-q5g48" Feb 17 18:31:35 crc kubenswrapper[4680]: I0217 18:31:35.432628 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qbdsn\" (UniqueName: \"kubernetes.io/projected/84f7b3f2-7b45-4503-8f31-4f8ecff6fd03-kube-api-access-qbdsn\") pod \"certified-operators-q5g48\" (UID: \"84f7b3f2-7b45-4503-8f31-4f8ecff6fd03\") " pod="openshift-marketplace/certified-operators-q5g48" Feb 17 18:31:35 crc kubenswrapper[4680]: I0217 18:31:35.508064 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q5g48" Feb 17 18:31:35 crc kubenswrapper[4680]: I0217 18:31:35.588681 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:31:35 crc kubenswrapper[4680]: I0217 18:31:35.588743 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:31:36 crc kubenswrapper[4680]: I0217 18:31:36.780493 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q5g48"] Feb 17 18:31:37 crc kubenswrapper[4680]: I0217 18:31:37.599943 4680 generic.go:334] "Generic (PLEG): container finished" podID="84f7b3f2-7b45-4503-8f31-4f8ecff6fd03" containerID="81c20fb859e9225000c7e3cc5f5c9213ce74dd8d3f1517b05391d4b5405b557f" exitCode=0 Feb 17 18:31:37 crc kubenswrapper[4680]: I0217 18:31:37.600021 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5g48" event={"ID":"84f7b3f2-7b45-4503-8f31-4f8ecff6fd03","Type":"ContainerDied","Data":"81c20fb859e9225000c7e3cc5f5c9213ce74dd8d3f1517b05391d4b5405b557f"} Feb 17 18:31:37 crc kubenswrapper[4680]: I0217 18:31:37.600331 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5g48" event={"ID":"84f7b3f2-7b45-4503-8f31-4f8ecff6fd03","Type":"ContainerStarted","Data":"537bacb18a63708291d2080e4b21f1ffc50169683fe178601ff22da676194dc0"} Feb 17 18:31:39 crc kubenswrapper[4680]: I0217 18:31:39.622391 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5g48" event={"ID":"84f7b3f2-7b45-4503-8f31-4f8ecff6fd03","Type":"ContainerStarted","Data":"d791b1218b102e2be26e2025fa86b353b31a31775246048f077cf2391eff5a47"} Feb 17 18:31:41 crc kubenswrapper[4680]: I0217 18:31:41.640079 4680 generic.go:334] "Generic (PLEG): container finished" podID="84f7b3f2-7b45-4503-8f31-4f8ecff6fd03" containerID="d791b1218b102e2be26e2025fa86b353b31a31775246048f077cf2391eff5a47" exitCode=0 Feb 17 18:31:41 crc kubenswrapper[4680]: I0217 18:31:41.640240 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5g48" event={"ID":"84f7b3f2-7b45-4503-8f31-4f8ecff6fd03","Type":"ContainerDied","Data":"d791b1218b102e2be26e2025fa86b353b31a31775246048f077cf2391eff5a47"} Feb 17 18:31:42 crc kubenswrapper[4680]: I0217 18:31:42.652632 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5g48" event={"ID":"84f7b3f2-7b45-4503-8f31-4f8ecff6fd03","Type":"ContainerStarted","Data":"8a93703d3f90a075d7595441ee8baba51f64874b375cb9730e6cd8fb2a86d73f"} Feb 17 
18:31:42 crc kubenswrapper[4680]: I0217 18:31:42.675063 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q5g48" podStartSLOduration=3.213922629 podStartE2EDuration="7.6750455s" podCreationTimestamp="2026-02-17 18:31:35 +0000 UTC" firstStartedPulling="2026-02-17 18:31:37.601956405 +0000 UTC m=+2827.152855084" lastFinishedPulling="2026-02-17 18:31:42.063079276 +0000 UTC m=+2831.613977955" observedRunningTime="2026-02-17 18:31:42.671139808 +0000 UTC m=+2832.222038497" watchObservedRunningTime="2026-02-17 18:31:42.6750455 +0000 UTC m=+2832.225944179" Feb 17 18:31:45 crc kubenswrapper[4680]: I0217 18:31:45.508608 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q5g48" Feb 17 18:31:45 crc kubenswrapper[4680]: I0217 18:31:45.508995 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q5g48" Feb 17 18:31:45 crc kubenswrapper[4680]: I0217 18:31:45.562841 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q5g48" Feb 17 18:31:55 crc kubenswrapper[4680]: I0217 18:31:55.572996 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q5g48" Feb 17 18:31:55 crc kubenswrapper[4680]: I0217 18:31:55.638057 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q5g48"] Feb 17 18:31:55 crc kubenswrapper[4680]: I0217 18:31:55.752218 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q5g48" podUID="84f7b3f2-7b45-4503-8f31-4f8ecff6fd03" containerName="registry-server" containerID="cri-o://8a93703d3f90a075d7595441ee8baba51f64874b375cb9730e6cd8fb2a86d73f" gracePeriod=2 Feb 17 18:31:56 crc kubenswrapper[4680]: I0217 18:31:56.330335 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q5g48" Feb 17 18:31:56 crc kubenswrapper[4680]: I0217 18:31:56.397631 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbdsn\" (UniqueName: \"kubernetes.io/projected/84f7b3f2-7b45-4503-8f31-4f8ecff6fd03-kube-api-access-qbdsn\") pod \"84f7b3f2-7b45-4503-8f31-4f8ecff6fd03\" (UID: \"84f7b3f2-7b45-4503-8f31-4f8ecff6fd03\") " Feb 17 18:31:56 crc kubenswrapper[4680]: I0217 18:31:56.397751 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84f7b3f2-7b45-4503-8f31-4f8ecff6fd03-utilities\") pod \"84f7b3f2-7b45-4503-8f31-4f8ecff6fd03\" (UID: \"84f7b3f2-7b45-4503-8f31-4f8ecff6fd03\") " Feb 17 18:31:56 crc kubenswrapper[4680]: I0217 18:31:56.397837 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84f7b3f2-7b45-4503-8f31-4f8ecff6fd03-catalog-content\") pod \"84f7b3f2-7b45-4503-8f31-4f8ecff6fd03\" (UID: \"84f7b3f2-7b45-4503-8f31-4f8ecff6fd03\") " Feb 17 18:31:56 crc kubenswrapper[4680]: I0217 18:31:56.398413 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84f7b3f2-7b45-4503-8f31-4f8ecff6fd03-utilities" (OuterVolumeSpecName: "utilities") pod "84f7b3f2-7b45-4503-8f31-4f8ecff6fd03" (UID: "84f7b3f2-7b45-4503-8f31-4f8ecff6fd03"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:31:56 crc kubenswrapper[4680]: I0217 18:31:56.411738 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84f7b3f2-7b45-4503-8f31-4f8ecff6fd03-kube-api-access-qbdsn" (OuterVolumeSpecName: "kube-api-access-qbdsn") pod "84f7b3f2-7b45-4503-8f31-4f8ecff6fd03" (UID: "84f7b3f2-7b45-4503-8f31-4f8ecff6fd03"). InnerVolumeSpecName "kube-api-access-qbdsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:31:56 crc kubenswrapper[4680]: I0217 18:31:56.500503 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbdsn\" (UniqueName: \"kubernetes.io/projected/84f7b3f2-7b45-4503-8f31-4f8ecff6fd03-kube-api-access-qbdsn\") on node \"crc\" DevicePath \"\"" Feb 17 18:31:56 crc kubenswrapper[4680]: I0217 18:31:56.500751 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84f7b3f2-7b45-4503-8f31-4f8ecff6fd03-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 18:31:56 crc kubenswrapper[4680]: I0217 18:31:56.505222 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84f7b3f2-7b45-4503-8f31-4f8ecff6fd03-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "84f7b3f2-7b45-4503-8f31-4f8ecff6fd03" (UID: "84f7b3f2-7b45-4503-8f31-4f8ecff6fd03"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:31:56 crc kubenswrapper[4680]: I0217 18:31:56.602982 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84f7b3f2-7b45-4503-8f31-4f8ecff6fd03-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 18:31:56 crc kubenswrapper[4680]: I0217 18:31:56.760984 4680 generic.go:334] "Generic (PLEG): container finished" podID="84f7b3f2-7b45-4503-8f31-4f8ecff6fd03" containerID="8a93703d3f90a075d7595441ee8baba51f64874b375cb9730e6cd8fb2a86d73f" exitCode=0 Feb 17 18:31:56 crc kubenswrapper[4680]: I0217 18:31:56.761027 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5g48" event={"ID":"84f7b3f2-7b45-4503-8f31-4f8ecff6fd03","Type":"ContainerDied","Data":"8a93703d3f90a075d7595441ee8baba51f64874b375cb9730e6cd8fb2a86d73f"} Feb 17 18:31:56 crc kubenswrapper[4680]: I0217 18:31:56.761052 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5g48" event={"ID":"84f7b3f2-7b45-4503-8f31-4f8ecff6fd03","Type":"ContainerDied","Data":"537bacb18a63708291d2080e4b21f1ffc50169683fe178601ff22da676194dc0"} Feb 17 18:31:56 crc kubenswrapper[4680]: I0217 18:31:56.761069 4680 scope.go:117] "RemoveContainer" containerID="8a93703d3f90a075d7595441ee8baba51f64874b375cb9730e6cd8fb2a86d73f" Feb 17 18:31:56 crc kubenswrapper[4680]: I0217 18:31:56.761191 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q5g48" Feb 17 18:31:56 crc kubenswrapper[4680]: I0217 18:31:56.781895 4680 scope.go:117] "RemoveContainer" containerID="d791b1218b102e2be26e2025fa86b353b31a31775246048f077cf2391eff5a47" Feb 17 18:31:56 crc kubenswrapper[4680]: I0217 18:31:56.796199 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q5g48"] Feb 17 18:31:56 crc kubenswrapper[4680]: I0217 18:31:56.818645 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q5g48"] Feb 17 18:31:56 crc kubenswrapper[4680]: I0217 18:31:56.839640 4680 scope.go:117] "RemoveContainer" containerID="81c20fb859e9225000c7e3cc5f5c9213ce74dd8d3f1517b05391d4b5405b557f" Feb 17 18:31:56 crc kubenswrapper[4680]: I0217 18:31:56.874466 4680 scope.go:117] "RemoveContainer" containerID="8a93703d3f90a075d7595441ee8baba51f64874b375cb9730e6cd8fb2a86d73f" Feb 17 18:31:56 crc kubenswrapper[4680]: E0217 18:31:56.878359 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a93703d3f90a075d7595441ee8baba51f64874b375cb9730e6cd8fb2a86d73f\": container with ID starting with 8a93703d3f90a075d7595441ee8baba51f64874b375cb9730e6cd8fb2a86d73f not found: ID does not exist" containerID="8a93703d3f90a075d7595441ee8baba51f64874b375cb9730e6cd8fb2a86d73f" Feb 17 18:31:56 crc kubenswrapper[4680]: I0217 18:31:56.878418 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a93703d3f90a075d7595441ee8baba51f64874b375cb9730e6cd8fb2a86d73f"} err="failed to get container status \"8a93703d3f90a075d7595441ee8baba51f64874b375cb9730e6cd8fb2a86d73f\": rpc error: code = NotFound desc = could not find container \"8a93703d3f90a075d7595441ee8baba51f64874b375cb9730e6cd8fb2a86d73f\": container with ID starting with 8a93703d3f90a075d7595441ee8baba51f64874b375cb9730e6cd8fb2a86d73f not found: ID does not exist" Feb 17 18:31:56 crc kubenswrapper[4680]: I0217 18:31:56.878451 4680 scope.go:117] "RemoveContainer" containerID="d791b1218b102e2be26e2025fa86b353b31a31775246048f077cf2391eff5a47" Feb 17 18:31:56 crc kubenswrapper[4680]: E0217 18:31:56.878793 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d791b1218b102e2be26e2025fa86b353b31a31775246048f077cf2391eff5a47\": container with ID starting with d791b1218b102e2be26e2025fa86b353b31a31775246048f077cf2391eff5a47 not found: ID does not exist" containerID="d791b1218b102e2be26e2025fa86b353b31a31775246048f077cf2391eff5a47" Feb 17 18:31:56 crc kubenswrapper[4680]: I0217 18:31:56.878824 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d791b1218b102e2be26e2025fa86b353b31a31775246048f077cf2391eff5a47"} err="failed to get container status \"d791b1218b102e2be26e2025fa86b353b31a31775246048f077cf2391eff5a47\": rpc error: code = NotFound desc = could not find container \"d791b1218b102e2be26e2025fa86b353b31a31775246048f077cf2391eff5a47\": container with ID starting with d791b1218b102e2be26e2025fa86b353b31a31775246048f077cf2391eff5a47 not found: ID does not exist" Feb 17 18:31:56 crc kubenswrapper[4680]: I0217 18:31:56.878842 4680 scope.go:117] "RemoveContainer" containerID="81c20fb859e9225000c7e3cc5f5c9213ce74dd8d3f1517b05391d4b5405b557f" Feb 17 18:31:56 crc kubenswrapper[4680]: E0217 18:31:56.879235 4680 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"81c20fb859e9225000c7e3cc5f5c9213ce74dd8d3f1517b05391d4b5405b557f\": container with ID starting with 81c20fb859e9225000c7e3cc5f5c9213ce74dd8d3f1517b05391d4b5405b557f not found: ID does not exist" containerID="81c20fb859e9225000c7e3cc5f5c9213ce74dd8d3f1517b05391d4b5405b557f" Feb 17 18:31:56 crc kubenswrapper[4680]: I0217 18:31:56.879262 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81c20fb859e9225000c7e3cc5f5c9213ce74dd8d3f1517b05391d4b5405b557f"} err="failed to get container status \"81c20fb859e9225000c7e3cc5f5c9213ce74dd8d3f1517b05391d4b5405b557f\": rpc error: code = NotFound desc = could not find container \"81c20fb859e9225000c7e3cc5f5c9213ce74dd8d3f1517b05391d4b5405b557f\": container with ID starting with 81c20fb859e9225000c7e3cc5f5c9213ce74dd8d3f1517b05391d4b5405b557f not found: ID does not exist" Feb 17 18:31:57 crc kubenswrapper[4680]: I0217 18:31:57.114909 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84f7b3f2-7b45-4503-8f31-4f8ecff6fd03" path="/var/lib/kubelet/pods/84f7b3f2-7b45-4503-8f31-4f8ecff6fd03/volumes" Feb 17 18:32:05 crc kubenswrapper[4680]: I0217 18:32:05.588620 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:32:05 crc kubenswrapper[4680]: I0217 18:32:05.589219 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:32:35 crc kubenswrapper[4680]: I0217 18:32:35.589271 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:32:35 crc kubenswrapper[4680]: I0217 18:32:35.589906 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:32:35 crc kubenswrapper[4680]: I0217 18:32:35.589954 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" Feb 17 18:32:35 crc kubenswrapper[4680]: I0217 18:32:35.590705 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fb46451ff71571aa4f78ba430701c1a2a0d90a6e4e3c601e4b6131f99f241262"} pod="openshift-machine-config-operator/machine-config-daemon-rfw56" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 18:32:35 crc kubenswrapper[4680]: I0217 18:32:35.590769 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" 
podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" containerID="cri-o://fb46451ff71571aa4f78ba430701c1a2a0d90a6e4e3c601e4b6131f99f241262" gracePeriod=600 Feb 17 18:32:35 crc kubenswrapper[4680]: E0217 18:32:35.715208 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:32:36 crc kubenswrapper[4680]: I0217 18:32:36.113621 4680 generic.go:334] "Generic (PLEG): container finished" podID="8dd08058-016e-4607-8be6-40463674c2fe" containerID="fb46451ff71571aa4f78ba430701c1a2a0d90a6e4e3c601e4b6131f99f241262" exitCode=0 Feb 17 18:32:36 crc kubenswrapper[4680]: I0217 18:32:36.113661 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerDied","Data":"fb46451ff71571aa4f78ba430701c1a2a0d90a6e4e3c601e4b6131f99f241262"} Feb 17 18:32:36 crc kubenswrapper[4680]: I0217 18:32:36.113927 4680 scope.go:117] "RemoveContainer" containerID="67e68953afa70326e197284902024efbdd6d091acf5e877d90934e6fbf164508" Feb 17 18:32:36 crc kubenswrapper[4680]: I0217 18:32:36.114479 4680 scope.go:117] "RemoveContainer" containerID="fb46451ff71571aa4f78ba430701c1a2a0d90a6e4e3c601e4b6131f99f241262" Feb 17 18:32:36 crc kubenswrapper[4680]: E0217 18:32:36.114738 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:32:48 crc kubenswrapper[4680]: I0217 18:32:48.106185 4680 scope.go:117] "RemoveContainer" containerID="fb46451ff71571aa4f78ba430701c1a2a0d90a6e4e3c601e4b6131f99f241262" Feb 17 18:32:48 crc kubenswrapper[4680]: E0217 18:32:48.107121 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:32:48 crc kubenswrapper[4680]: I0217 18:32:48.545065 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hdprk"] Feb 17 18:32:48 crc kubenswrapper[4680]: E0217 18:32:48.545550 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84f7b3f2-7b45-4503-8f31-4f8ecff6fd03" containerName="extract-content" Feb 17 18:32:48 crc kubenswrapper[4680]: I0217 18:32:48.545574 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="84f7b3f2-7b45-4503-8f31-4f8ecff6fd03" containerName="extract-content" Feb 17 18:32:48 crc kubenswrapper[4680]: E0217 18:32:48.545596 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84f7b3f2-7b45-4503-8f31-4f8ecff6fd03" 
containerName="registry-server" Feb 17 18:32:48 crc kubenswrapper[4680]: I0217 18:32:48.545604 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="84f7b3f2-7b45-4503-8f31-4f8ecff6fd03" containerName="registry-server" Feb 17 18:32:48 crc kubenswrapper[4680]: E0217 18:32:48.545642 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84f7b3f2-7b45-4503-8f31-4f8ecff6fd03" containerName="extract-utilities" Feb 17 18:32:48 crc kubenswrapper[4680]: I0217 18:32:48.545652 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="84f7b3f2-7b45-4503-8f31-4f8ecff6fd03" containerName="extract-utilities" Feb 17 18:32:48 crc kubenswrapper[4680]: I0217 18:32:48.545900 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="84f7b3f2-7b45-4503-8f31-4f8ecff6fd03" containerName="registry-server" Feb 17 18:32:48 crc kubenswrapper[4680]: I0217 18:32:48.547581 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hdprk" Feb 17 18:32:48 crc kubenswrapper[4680]: I0217 18:32:48.565140 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hdprk"] Feb 17 18:32:48 crc kubenswrapper[4680]: I0217 18:32:48.651668 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a-catalog-content\") pod \"redhat-operators-hdprk\" (UID: \"5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a\") " pod="openshift-marketplace/redhat-operators-hdprk" Feb 17 18:32:48 crc kubenswrapper[4680]: I0217 18:32:48.651731 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a-utilities\") pod \"redhat-operators-hdprk\" (UID: \"5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a\") " pod="openshift-marketplace/redhat-operators-hdprk" Feb 17 18:32:48 crc kubenswrapper[4680]: I0217 18:32:48.651850 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8zfp\" (UniqueName: \"kubernetes.io/projected/5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a-kube-api-access-s8zfp\") pod \"redhat-operators-hdprk\" (UID: \"5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a\") " pod="openshift-marketplace/redhat-operators-hdprk" Feb 17 18:32:48 crc kubenswrapper[4680]: I0217 18:32:48.753458 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a-catalog-content\") pod \"redhat-operators-hdprk\" (UID: \"5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a\") " pod="openshift-marketplace/redhat-operators-hdprk" Feb 17 18:32:48 crc kubenswrapper[4680]: I0217 18:32:48.753537 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a-utilities\") pod \"redhat-operators-hdprk\" (UID: \"5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a\") " pod="openshift-marketplace/redhat-operators-hdprk" Feb 17 18:32:48 crc kubenswrapper[4680]: I0217 18:32:48.753650 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8zfp\" (UniqueName: \"kubernetes.io/projected/5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a-kube-api-access-s8zfp\") pod \"redhat-operators-hdprk\" (UID: \"5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a\") " 
pod="openshift-marketplace/redhat-operators-hdprk" Feb 17 18:32:48 crc kubenswrapper[4680]: I0217 18:32:48.754076 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a-catalog-content\") pod \"redhat-operators-hdprk\" (UID: \"5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a\") " pod="openshift-marketplace/redhat-operators-hdprk" Feb 17 18:32:48 crc kubenswrapper[4680]: I0217 18:32:48.754406 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a-utilities\") pod \"redhat-operators-hdprk\" (UID: \"5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a\") " pod="openshift-marketplace/redhat-operators-hdprk" Feb 17 18:32:48 crc kubenswrapper[4680]: I0217 18:32:48.772985 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8zfp\" (UniqueName: \"kubernetes.io/projected/5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a-kube-api-access-s8zfp\") pod \"redhat-operators-hdprk\" (UID: \"5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a\") " pod="openshift-marketplace/redhat-operators-hdprk" Feb 17 18:32:48 crc kubenswrapper[4680]: I0217 18:32:48.876485 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hdprk" Feb 17 18:32:49 crc kubenswrapper[4680]: I0217 18:32:49.348023 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hdprk"] Feb 17 18:32:50 crc kubenswrapper[4680]: I0217 18:32:50.246956 4680 generic.go:334] "Generic (PLEG): container finished" podID="5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a" containerID="e9e2aa5ecdd2b232eb4e4da5ea4fd6d1ce8b2d98c71ad5ddbb69a68ab988d86c" exitCode=0 Feb 17 18:32:50 crc kubenswrapper[4680]: I0217 18:32:50.247022 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdprk" event={"ID":"5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a","Type":"ContainerDied","Data":"e9e2aa5ecdd2b232eb4e4da5ea4fd6d1ce8b2d98c71ad5ddbb69a68ab988d86c"} Feb 17 18:32:50 crc kubenswrapper[4680]: I0217 18:32:50.247261 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdprk" event={"ID":"5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a","Type":"ContainerStarted","Data":"b88a9ed3631c9463e38074f9364277a8c87c6deb859286472ac2755504fa029f"} Feb 17 18:32:51 crc kubenswrapper[4680]: I0217 18:32:51.264058 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdprk" event={"ID":"5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a","Type":"ContainerStarted","Data":"7dcda90e5b6d0b1a505131374ff412c0af0b0e49d0c02b8595c40de27c0085a3"} Feb 17 18:32:56 crc kubenswrapper[4680]: I0217 18:32:56.308591 4680 generic.go:334] "Generic (PLEG): container finished" podID="5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a" containerID="7dcda90e5b6d0b1a505131374ff412c0af0b0e49d0c02b8595c40de27c0085a3" exitCode=0 Feb 17 18:32:56 crc kubenswrapper[4680]: I0217 18:32:56.308645 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdprk" event={"ID":"5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a","Type":"ContainerDied","Data":"7dcda90e5b6d0b1a505131374ff412c0af0b0e49d0c02b8595c40de27c0085a3"} Feb 17 18:32:57 crc kubenswrapper[4680]: I0217 18:32:57.330162 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdprk" 
event={"ID":"5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a","Type":"ContainerStarted","Data":"4c20c831a1789786671312803f7f289ed4070eb35e82b3847f41cf23a9a934a1"} Feb 17 18:32:58 crc kubenswrapper[4680]: I0217 18:32:58.876906 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hdprk" Feb 17 18:32:58 crc kubenswrapper[4680]: I0217 18:32:58.877990 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hdprk" Feb 17 18:32:59 crc kubenswrapper[4680]: I0217 18:32:59.927348 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hdprk" podUID="5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a" containerName="registry-server" probeResult="failure" output=< Feb 17 18:32:59 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Feb 17 18:32:59 crc kubenswrapper[4680]: > Feb 17 18:33:03 crc kubenswrapper[4680]: I0217 18:33:03.110776 4680 scope.go:117] "RemoveContainer" containerID="fb46451ff71571aa4f78ba430701c1a2a0d90a6e4e3c601e4b6131f99f241262" Feb 17 18:33:03 crc kubenswrapper[4680]: E0217 18:33:03.111238 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:33:09 crc kubenswrapper[4680]: I0217 18:33:09.935626 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hdprk" podUID="5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a" containerName="registry-server" probeResult="failure" output=< Feb 17 18:33:09 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Feb 17 18:33:09 crc kubenswrapper[4680]: > Feb 17 18:33:14 crc kubenswrapper[4680]: I0217 18:33:14.105667 4680 scope.go:117] "RemoveContainer" containerID="fb46451ff71571aa4f78ba430701c1a2a0d90a6e4e3c601e4b6131f99f241262" Feb 17 18:33:14 crc kubenswrapper[4680]: E0217 18:33:14.106507 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:33:19 crc kubenswrapper[4680]: I0217 18:33:19.926391 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hdprk" podUID="5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a" containerName="registry-server" probeResult="failure" output=< Feb 17 18:33:19 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Feb 17 18:33:19 crc kubenswrapper[4680]: > Feb 17 18:33:28 crc kubenswrapper[4680]: I0217 18:33:28.921587 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hdprk" Feb 17 18:33:28 crc kubenswrapper[4680]: I0217 18:33:28.936993 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hdprk" podStartSLOduration=34.444629236 
podStartE2EDuration="40.936975091s" podCreationTimestamp="2026-02-17 18:32:48 +0000 UTC" firstStartedPulling="2026-02-17 18:32:50.249656425 +0000 UTC m=+2899.800555104" lastFinishedPulling="2026-02-17 18:32:56.74200228 +0000 UTC m=+2906.292900959" observedRunningTime="2026-02-17 18:32:57.353573422 +0000 UTC m=+2906.904472101" watchObservedRunningTime="2026-02-17 18:33:28.936975091 +0000 UTC m=+2938.487873770" Feb 17 18:33:28 crc kubenswrapper[4680]: I0217 18:33:28.988366 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hdprk" Feb 17 18:33:29 crc kubenswrapper[4680]: I0217 18:33:29.105067 4680 scope.go:117] "RemoveContainer" containerID="fb46451ff71571aa4f78ba430701c1a2a0d90a6e4e3c601e4b6131f99f241262" Feb 17 18:33:29 crc kubenswrapper[4680]: E0217 18:33:29.105370 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:33:29 crc kubenswrapper[4680]: I0217 18:33:29.154006 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hdprk"] Feb 17 18:33:30 crc kubenswrapper[4680]: I0217 18:33:30.595399 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hdprk" podUID="5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a" containerName="registry-server" containerID="cri-o://4c20c831a1789786671312803f7f289ed4070eb35e82b3847f41cf23a9a934a1" gracePeriod=2 Feb 17 18:33:31 crc kubenswrapper[4680]: I0217 18:33:31.308330 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hdprk" Feb 17 18:33:31 crc kubenswrapper[4680]: I0217 18:33:31.352731 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a-catalog-content\") pod \"5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a\" (UID: \"5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a\") " Feb 17 18:33:31 crc kubenswrapper[4680]: I0217 18:33:31.352796 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8zfp\" (UniqueName: \"kubernetes.io/projected/5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a-kube-api-access-s8zfp\") pod \"5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a\" (UID: \"5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a\") " Feb 17 18:33:31 crc kubenswrapper[4680]: I0217 18:33:31.352857 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a-utilities\") pod \"5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a\" (UID: \"5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a\") " Feb 17 18:33:31 crc kubenswrapper[4680]: I0217 18:33:31.353685 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a-utilities" (OuterVolumeSpecName: "utilities") pod "5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a" (UID: "5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:33:31 crc kubenswrapper[4680]: I0217 18:33:31.354417 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 18:33:31 crc kubenswrapper[4680]: I0217 18:33:31.366527 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a-kube-api-access-s8zfp" (OuterVolumeSpecName: "kube-api-access-s8zfp") pod "5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a" (UID: "5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a"). InnerVolumeSpecName "kube-api-access-s8zfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:33:31 crc kubenswrapper[4680]: I0217 18:33:31.455852 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8zfp\" (UniqueName: \"kubernetes.io/projected/5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a-kube-api-access-s8zfp\") on node \"crc\" DevicePath \"\"" Feb 17 18:33:31 crc kubenswrapper[4680]: I0217 18:33:31.487307 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a" (UID: "5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:33:31 crc kubenswrapper[4680]: I0217 18:33:31.557360 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 18:33:31 crc kubenswrapper[4680]: I0217 18:33:31.605325 4680 generic.go:334] "Generic (PLEG): container finished" podID="5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a" containerID="4c20c831a1789786671312803f7f289ed4070eb35e82b3847f41cf23a9a934a1" exitCode=0 Feb 17 18:33:31 crc kubenswrapper[4680]: I0217 18:33:31.605365 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdprk" event={"ID":"5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a","Type":"ContainerDied","Data":"4c20c831a1789786671312803f7f289ed4070eb35e82b3847f41cf23a9a934a1"} Feb 17 18:33:31 crc kubenswrapper[4680]: I0217 18:33:31.605393 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hdprk" Feb 17 18:33:31 crc kubenswrapper[4680]: I0217 18:33:31.605412 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdprk" event={"ID":"5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a","Type":"ContainerDied","Data":"b88a9ed3631c9463e38074f9364277a8c87c6deb859286472ac2755504fa029f"} Feb 17 18:33:31 crc kubenswrapper[4680]: I0217 18:33:31.605438 4680 scope.go:117] "RemoveContainer" containerID="4c20c831a1789786671312803f7f289ed4070eb35e82b3847f41cf23a9a934a1" Feb 17 18:33:31 crc kubenswrapper[4680]: I0217 18:33:31.634846 4680 scope.go:117] "RemoveContainer" containerID="7dcda90e5b6d0b1a505131374ff412c0af0b0e49d0c02b8595c40de27c0085a3" Feb 17 18:33:31 crc kubenswrapper[4680]: I0217 18:33:31.639007 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hdprk"] Feb 17 18:33:31 crc kubenswrapper[4680]: I0217 18:33:31.648045 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hdprk"] Feb 17 18:33:31 crc kubenswrapper[4680]: I0217 18:33:31.663247 4680 scope.go:117] "RemoveContainer" containerID="e9e2aa5ecdd2b232eb4e4da5ea4fd6d1ce8b2d98c71ad5ddbb69a68ab988d86c" Feb 17 18:33:31 crc kubenswrapper[4680]: I0217 18:33:31.706414 4680 scope.go:117] "RemoveContainer" containerID="4c20c831a1789786671312803f7f289ed4070eb35e82b3847f41cf23a9a934a1" Feb 17 18:33:31 crc kubenswrapper[4680]: E0217 18:33:31.707016 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c20c831a1789786671312803f7f289ed4070eb35e82b3847f41cf23a9a934a1\": container with ID starting with 4c20c831a1789786671312803f7f289ed4070eb35e82b3847f41cf23a9a934a1 not found: ID does not exist" containerID="4c20c831a1789786671312803f7f289ed4070eb35e82b3847f41cf23a9a934a1" Feb 17 18:33:31 crc kubenswrapper[4680]: I0217 18:33:31.707046 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c20c831a1789786671312803f7f289ed4070eb35e82b3847f41cf23a9a934a1"} err="failed to get container status \"4c20c831a1789786671312803f7f289ed4070eb35e82b3847f41cf23a9a934a1\": rpc error: code = NotFound desc = could not find container \"4c20c831a1789786671312803f7f289ed4070eb35e82b3847f41cf23a9a934a1\": container with ID starting with 4c20c831a1789786671312803f7f289ed4070eb35e82b3847f41cf23a9a934a1 not found: ID does not exist" Feb 17 18:33:31 crc kubenswrapper[4680]: I0217 18:33:31.707066 4680 scope.go:117] "RemoveContainer" containerID="7dcda90e5b6d0b1a505131374ff412c0af0b0e49d0c02b8595c40de27c0085a3" Feb 17 18:33:31 crc kubenswrapper[4680]: E0217 18:33:31.708241 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dcda90e5b6d0b1a505131374ff412c0af0b0e49d0c02b8595c40de27c0085a3\": container with ID starting with 7dcda90e5b6d0b1a505131374ff412c0af0b0e49d0c02b8595c40de27c0085a3 not found: ID does not exist" containerID="7dcda90e5b6d0b1a505131374ff412c0af0b0e49d0c02b8595c40de27c0085a3" Feb 17 18:33:31 crc kubenswrapper[4680]: I0217 18:33:31.708281 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dcda90e5b6d0b1a505131374ff412c0af0b0e49d0c02b8595c40de27c0085a3"} err="failed to get container status \"7dcda90e5b6d0b1a505131374ff412c0af0b0e49d0c02b8595c40de27c0085a3\": rpc error: code = NotFound desc = could not find container 
\"7dcda90e5b6d0b1a505131374ff412c0af0b0e49d0c02b8595c40de27c0085a3\": container with ID starting with 7dcda90e5b6d0b1a505131374ff412c0af0b0e49d0c02b8595c40de27c0085a3 not found: ID does not exist" Feb 17 18:33:31 crc kubenswrapper[4680]: I0217 18:33:31.708310 4680 scope.go:117] "RemoveContainer" containerID="e9e2aa5ecdd2b232eb4e4da5ea4fd6d1ce8b2d98c71ad5ddbb69a68ab988d86c" Feb 17 18:33:31 crc kubenswrapper[4680]: E0217 18:33:31.708836 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9e2aa5ecdd2b232eb4e4da5ea4fd6d1ce8b2d98c71ad5ddbb69a68ab988d86c\": container with ID starting with e9e2aa5ecdd2b232eb4e4da5ea4fd6d1ce8b2d98c71ad5ddbb69a68ab988d86c not found: ID does not exist" containerID="e9e2aa5ecdd2b232eb4e4da5ea4fd6d1ce8b2d98c71ad5ddbb69a68ab988d86c" Feb 17 18:33:31 crc kubenswrapper[4680]: I0217 18:33:31.708877 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9e2aa5ecdd2b232eb4e4da5ea4fd6d1ce8b2d98c71ad5ddbb69a68ab988d86c"} err="failed to get container status \"e9e2aa5ecdd2b232eb4e4da5ea4fd6d1ce8b2d98c71ad5ddbb69a68ab988d86c\": rpc error: code = NotFound desc = could not find container \"e9e2aa5ecdd2b232eb4e4da5ea4fd6d1ce8b2d98c71ad5ddbb69a68ab988d86c\": container with ID starting with e9e2aa5ecdd2b232eb4e4da5ea4fd6d1ce8b2d98c71ad5ddbb69a68ab988d86c not found: ID does not exist" Feb 17 18:33:33 crc kubenswrapper[4680]: I0217 18:33:33.117688 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a" path="/var/lib/kubelet/pods/5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a/volumes" Feb 17 18:33:43 crc kubenswrapper[4680]: I0217 18:33:43.105413 4680 scope.go:117] "RemoveContainer" containerID="fb46451ff71571aa4f78ba430701c1a2a0d90a6e4e3c601e4b6131f99f241262" Feb 17 18:33:43 crc kubenswrapper[4680]: E0217 18:33:43.106128 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:33:57 crc kubenswrapper[4680]: I0217 18:33:57.105835 4680 scope.go:117] "RemoveContainer" containerID="fb46451ff71571aa4f78ba430701c1a2a0d90a6e4e3c601e4b6131f99f241262" Feb 17 18:33:57 crc kubenswrapper[4680]: E0217 18:33:57.106600 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:34:09 crc kubenswrapper[4680]: I0217 18:34:09.105619 4680 scope.go:117] "RemoveContainer" containerID="fb46451ff71571aa4f78ba430701c1a2a0d90a6e4e3c601e4b6131f99f241262" Feb 17 18:34:09 crc kubenswrapper[4680]: E0217 18:34:09.106481 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:34:21 crc kubenswrapper[4680]: I0217 18:34:21.113569 4680 scope.go:117] "RemoveContainer" containerID="fb46451ff71571aa4f78ba430701c1a2a0d90a6e4e3c601e4b6131f99f241262" Feb 17 18:34:21 crc kubenswrapper[4680]: E0217 18:34:21.114337 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:34:36 crc kubenswrapper[4680]: I0217 18:34:36.106045 4680 scope.go:117] "RemoveContainer" containerID="fb46451ff71571aa4f78ba430701c1a2a0d90a6e4e3c601e4b6131f99f241262" Feb 17 18:34:36 crc kubenswrapper[4680]: E0217 18:34:36.106807 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:34:50 crc kubenswrapper[4680]: I0217 18:34:50.106897 4680 scope.go:117] "RemoveContainer" containerID="fb46451ff71571aa4f78ba430701c1a2a0d90a6e4e3c601e4b6131f99f241262" Feb 17 18:34:50 crc kubenswrapper[4680]: E0217 18:34:50.107627 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:35:03 crc kubenswrapper[4680]: I0217 18:35:03.105954 4680 scope.go:117] "RemoveContainer" containerID="fb46451ff71571aa4f78ba430701c1a2a0d90a6e4e3c601e4b6131f99f241262" Feb 17 18:35:03 crc kubenswrapper[4680]: E0217 18:35:03.106840 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:35:16 crc kubenswrapper[4680]: I0217 18:35:16.105633 4680 scope.go:117] "RemoveContainer" containerID="fb46451ff71571aa4f78ba430701c1a2a0d90a6e4e3c601e4b6131f99f241262" Feb 17 18:35:16 crc kubenswrapper[4680]: E0217 18:35:16.106455 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" 
podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:35:28 crc kubenswrapper[4680]: I0217 18:35:28.106448 4680 scope.go:117] "RemoveContainer" containerID="fb46451ff71571aa4f78ba430701c1a2a0d90a6e4e3c601e4b6131f99f241262" Feb 17 18:35:28 crc kubenswrapper[4680]: E0217 18:35:28.107089 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:35:40 crc kubenswrapper[4680]: I0217 18:35:40.105395 4680 scope.go:117] "RemoveContainer" containerID="fb46451ff71571aa4f78ba430701c1a2a0d90a6e4e3c601e4b6131f99f241262" Feb 17 18:35:40 crc kubenswrapper[4680]: E0217 18:35:40.107043 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:35:55 crc kubenswrapper[4680]: I0217 18:35:55.105916 4680 scope.go:117] "RemoveContainer" containerID="fb46451ff71571aa4f78ba430701c1a2a0d90a6e4e3c601e4b6131f99f241262" Feb 17 18:35:55 crc kubenswrapper[4680]: E0217 18:35:55.106635 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:36:07 crc kubenswrapper[4680]: I0217 18:36:07.105541 4680 scope.go:117] "RemoveContainer" containerID="fb46451ff71571aa4f78ba430701c1a2a0d90a6e4e3c601e4b6131f99f241262" Feb 17 18:36:07 crc kubenswrapper[4680]: E0217 18:36:07.106354 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:36:22 crc kubenswrapper[4680]: I0217 18:36:22.105622 4680 scope.go:117] "RemoveContainer" containerID="fb46451ff71571aa4f78ba430701c1a2a0d90a6e4e3c601e4b6131f99f241262" Feb 17 18:36:22 crc kubenswrapper[4680]: E0217 18:36:22.106392 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:36:35 crc kubenswrapper[4680]: I0217 18:36:35.106061 4680 scope.go:117] "RemoveContainer" 
containerID="fb46451ff71571aa4f78ba430701c1a2a0d90a6e4e3c601e4b6131f99f241262" Feb 17 18:36:35 crc kubenswrapper[4680]: E0217 18:36:35.106855 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:36:47 crc kubenswrapper[4680]: I0217 18:36:47.105408 4680 scope.go:117] "RemoveContainer" containerID="fb46451ff71571aa4f78ba430701c1a2a0d90a6e4e3c601e4b6131f99f241262" Feb 17 18:36:47 crc kubenswrapper[4680]: E0217 18:36:47.106174 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:37:00 crc kubenswrapper[4680]: I0217 18:37:00.105342 4680 scope.go:117] "RemoveContainer" containerID="fb46451ff71571aa4f78ba430701c1a2a0d90a6e4e3c601e4b6131f99f241262" Feb 17 18:37:00 crc kubenswrapper[4680]: E0217 18:37:00.106178 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:37:14 crc kubenswrapper[4680]: I0217 18:37:14.105020 4680 scope.go:117] "RemoveContainer" containerID="fb46451ff71571aa4f78ba430701c1a2a0d90a6e4e3c601e4b6131f99f241262" Feb 17 18:37:14 crc kubenswrapper[4680]: E0217 18:37:14.105628 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:37:27 crc kubenswrapper[4680]: I0217 18:37:27.104916 4680 scope.go:117] "RemoveContainer" containerID="fb46451ff71571aa4f78ba430701c1a2a0d90a6e4e3c601e4b6131f99f241262" Feb 17 18:37:27 crc kubenswrapper[4680]: E0217 18:37:27.106790 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:37:38 crc kubenswrapper[4680]: I0217 18:37:38.105788 4680 scope.go:117] "RemoveContainer" containerID="fb46451ff71571aa4f78ba430701c1a2a0d90a6e4e3c601e4b6131f99f241262" Feb 17 18:37:38 crc kubenswrapper[4680]: I0217 18:37:38.665383 4680 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerStarted","Data":"b02e6d4c7f88ab2b2c986289df7f8665a41abc86c42a850f0220d3636dfaedbf"} Feb 17 18:40:05 crc kubenswrapper[4680]: I0217 18:40:05.589233 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:40:05 crc kubenswrapper[4680]: I0217 18:40:05.589774 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:40:35 crc kubenswrapper[4680]: I0217 18:40:35.588658 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:40:35 crc kubenswrapper[4680]: I0217 18:40:35.590234 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:40:47 crc kubenswrapper[4680]: I0217 18:40:47.148921 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kbqbg"] Feb 17 18:40:47 crc kubenswrapper[4680]: E0217 18:40:47.150148 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a" containerName="registry-server" Feb 17 18:40:47 crc kubenswrapper[4680]: I0217 18:40:47.150164 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a" containerName="registry-server" Feb 17 18:40:47 crc kubenswrapper[4680]: E0217 18:40:47.150191 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a" containerName="extract-utilities" Feb 17 18:40:47 crc kubenswrapper[4680]: I0217 18:40:47.150199 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a" containerName="extract-utilities" Feb 17 18:40:47 crc kubenswrapper[4680]: E0217 18:40:47.150213 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a" containerName="extract-content" Feb 17 18:40:47 crc kubenswrapper[4680]: I0217 18:40:47.150220 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a" containerName="extract-content" Feb 17 18:40:47 crc kubenswrapper[4680]: I0217 18:40:47.150482 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e5aafc7-0f56-4a17-9d61-1ae2ba555c9a" containerName="registry-server" Feb 17 18:40:47 crc kubenswrapper[4680]: I0217 18:40:47.159260 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kbqbg" Feb 17 18:40:47 crc kubenswrapper[4680]: I0217 18:40:47.172612 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kbqbg"] Feb 17 18:40:47 crc kubenswrapper[4680]: I0217 18:40:47.275187 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2knjj\" (UniqueName: \"kubernetes.io/projected/9d30b3f0-4816-434f-8171-aa6597c5a958-kube-api-access-2knjj\") pod \"community-operators-kbqbg\" (UID: \"9d30b3f0-4816-434f-8171-aa6597c5a958\") " pod="openshift-marketplace/community-operators-kbqbg" Feb 17 18:40:47 crc kubenswrapper[4680]: I0217 18:40:47.275355 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d30b3f0-4816-434f-8171-aa6597c5a958-utilities\") pod \"community-operators-kbqbg\" (UID: \"9d30b3f0-4816-434f-8171-aa6597c5a958\") " pod="openshift-marketplace/community-operators-kbqbg" Feb 17 18:40:47 crc kubenswrapper[4680]: I0217 18:40:47.275501 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d30b3f0-4816-434f-8171-aa6597c5a958-catalog-content\") pod \"community-operators-kbqbg\" (UID: \"9d30b3f0-4816-434f-8171-aa6597c5a958\") " pod="openshift-marketplace/community-operators-kbqbg" Feb 17 18:40:47 crc kubenswrapper[4680]: I0217 18:40:47.377180 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d30b3f0-4816-434f-8171-aa6597c5a958-utilities\") pod \"community-operators-kbqbg\" (UID: \"9d30b3f0-4816-434f-8171-aa6597c5a958\") " pod="openshift-marketplace/community-operators-kbqbg" Feb 17 18:40:47 crc kubenswrapper[4680]: I0217 18:40:47.377635 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d30b3f0-4816-434f-8171-aa6597c5a958-catalog-content\") pod \"community-operators-kbqbg\" (UID: \"9d30b3f0-4816-434f-8171-aa6597c5a958\") " pod="openshift-marketplace/community-operators-kbqbg" Feb 17 18:40:47 crc kubenswrapper[4680]: I0217 18:40:47.377687 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d30b3f0-4816-434f-8171-aa6597c5a958-utilities\") pod \"community-operators-kbqbg\" (UID: \"9d30b3f0-4816-434f-8171-aa6597c5a958\") " pod="openshift-marketplace/community-operators-kbqbg" Feb 17 18:40:47 crc kubenswrapper[4680]: I0217 18:40:47.377996 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d30b3f0-4816-434f-8171-aa6597c5a958-catalog-content\") pod \"community-operators-kbqbg\" (UID: \"9d30b3f0-4816-434f-8171-aa6597c5a958\") " pod="openshift-marketplace/community-operators-kbqbg" Feb 17 18:40:47 crc kubenswrapper[4680]: I0217 18:40:47.378122 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2knjj\" (UniqueName: \"kubernetes.io/projected/9d30b3f0-4816-434f-8171-aa6597c5a958-kube-api-access-2knjj\") pod \"community-operators-kbqbg\" (UID: \"9d30b3f0-4816-434f-8171-aa6597c5a958\") " pod="openshift-marketplace/community-operators-kbqbg" Feb 17 18:40:47 crc kubenswrapper[4680]: I0217 18:40:47.399143 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2knjj\" (UniqueName: \"kubernetes.io/projected/9d30b3f0-4816-434f-8171-aa6597c5a958-kube-api-access-2knjj\") pod \"community-operators-kbqbg\" (UID: \"9d30b3f0-4816-434f-8171-aa6597c5a958\") " pod="openshift-marketplace/community-operators-kbqbg" Feb 17 18:40:47 crc kubenswrapper[4680]: I0217 18:40:47.506545 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kbqbg" Feb 17 18:40:48 crc kubenswrapper[4680]: I0217 18:40:48.136841 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kbqbg"] Feb 17 18:40:48 crc kubenswrapper[4680]: I0217 18:40:48.757444 4680 generic.go:334] "Generic (PLEG): container finished" podID="9d30b3f0-4816-434f-8171-aa6597c5a958" containerID="b335f200bf1ba8c6f2d4d589670348eb71ba32df3ca8688b7bd1fedc102a7d93" exitCode=0 Feb 17 18:40:48 crc kubenswrapper[4680]: I0217 18:40:48.757532 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kbqbg" event={"ID":"9d30b3f0-4816-434f-8171-aa6597c5a958","Type":"ContainerDied","Data":"b335f200bf1ba8c6f2d4d589670348eb71ba32df3ca8688b7bd1fedc102a7d93"} Feb 17 18:40:48 crc kubenswrapper[4680]: I0217 18:40:48.758720 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kbqbg" event={"ID":"9d30b3f0-4816-434f-8171-aa6597c5a958","Type":"ContainerStarted","Data":"d58e7e19d03bd5b33a846d6b4b15edddddd7dc72a8a921ea5c2fbc4ab9036a56"} Feb 17 18:40:48 crc kubenswrapper[4680]: I0217 18:40:48.760963 4680 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 18:40:49 crc kubenswrapper[4680]: I0217 18:40:49.771114 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kbqbg" event={"ID":"9d30b3f0-4816-434f-8171-aa6597c5a958","Type":"ContainerStarted","Data":"85e38fb76a08fa9857ac699d15fc2c662712561ca8b5dc14f07beac5d344f7f5"} Feb 17 18:40:51 crc kubenswrapper[4680]: I0217 18:40:51.787951 4680 generic.go:334] "Generic (PLEG): container finished" podID="9d30b3f0-4816-434f-8171-aa6597c5a958" containerID="85e38fb76a08fa9857ac699d15fc2c662712561ca8b5dc14f07beac5d344f7f5" exitCode=0 Feb 17 18:40:51 crc kubenswrapper[4680]: I0217 18:40:51.788214 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kbqbg" event={"ID":"9d30b3f0-4816-434f-8171-aa6597c5a958","Type":"ContainerDied","Data":"85e38fb76a08fa9857ac699d15fc2c662712561ca8b5dc14f07beac5d344f7f5"} Feb 17 18:40:52 crc kubenswrapper[4680]: I0217 18:40:52.799774 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kbqbg" event={"ID":"9d30b3f0-4816-434f-8171-aa6597c5a958","Type":"ContainerStarted","Data":"3da3d616fbd25caf2c6fc1972d9834db7ba9774dab10f6a05f937b87f68b17b3"} Feb 17 18:40:57 crc kubenswrapper[4680]: I0217 18:40:57.507015 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kbqbg" Feb 17 18:40:57 crc kubenswrapper[4680]: I0217 18:40:57.508436 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kbqbg" Feb 17 18:40:57 crc kubenswrapper[4680]: I0217 18:40:57.558823 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kbqbg" Feb 17 18:40:57 crc 
kubenswrapper[4680]: I0217 18:40:57.585818 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kbqbg" podStartSLOduration=7.13494338 podStartE2EDuration="10.585795345s" podCreationTimestamp="2026-02-17 18:40:47 +0000 UTC" firstStartedPulling="2026-02-17 18:40:48.759267755 +0000 UTC m=+3378.310166434" lastFinishedPulling="2026-02-17 18:40:52.21011972 +0000 UTC m=+3381.761018399" observedRunningTime="2026-02-17 18:40:52.82402768 +0000 UTC m=+3382.374926359" watchObservedRunningTime="2026-02-17 18:40:57.585795345 +0000 UTC m=+3387.136694024" Feb 17 18:40:57 crc kubenswrapper[4680]: I0217 18:40:57.889541 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kbqbg" Feb 17 18:40:57 crc kubenswrapper[4680]: I0217 18:40:57.946835 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kbqbg"] Feb 17 18:40:59 crc kubenswrapper[4680]: I0217 18:40:59.853676 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kbqbg" podUID="9d30b3f0-4816-434f-8171-aa6597c5a958" containerName="registry-server" containerID="cri-o://3da3d616fbd25caf2c6fc1972d9834db7ba9774dab10f6a05f937b87f68b17b3" gracePeriod=2 Feb 17 18:41:00 crc kubenswrapper[4680]: I0217 18:41:00.217097 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qz5w2"] Feb 17 18:41:00 crc kubenswrapper[4680]: I0217 18:41:00.219542 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qz5w2" Feb 17 18:41:00 crc kubenswrapper[4680]: I0217 18:41:00.280299 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qz5w2"] Feb 17 18:41:00 crc kubenswrapper[4680]: I0217 18:41:00.365923 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f-utilities\") pod \"redhat-marketplace-qz5w2\" (UID: \"4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f\") " pod="openshift-marketplace/redhat-marketplace-qz5w2" Feb 17 18:41:00 crc kubenswrapper[4680]: I0217 18:41:00.366004 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcts2\" (UniqueName: \"kubernetes.io/projected/4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f-kube-api-access-pcts2\") pod \"redhat-marketplace-qz5w2\" (UID: \"4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f\") " pod="openshift-marketplace/redhat-marketplace-qz5w2" Feb 17 18:41:00 crc kubenswrapper[4680]: I0217 18:41:00.366057 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f-catalog-content\") pod \"redhat-marketplace-qz5w2\" (UID: \"4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f\") " pod="openshift-marketplace/redhat-marketplace-qz5w2" Feb 17 18:41:00 crc kubenswrapper[4680]: I0217 18:41:00.468122 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f-utilities\") pod \"redhat-marketplace-qz5w2\" (UID: \"4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f\") " pod="openshift-marketplace/redhat-marketplace-qz5w2" Feb 17 18:41:00 crc kubenswrapper[4680]: I0217 
18:41:00.468403 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcts2\" (UniqueName: \"kubernetes.io/projected/4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f-kube-api-access-pcts2\") pod \"redhat-marketplace-qz5w2\" (UID: \"4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f\") " pod="openshift-marketplace/redhat-marketplace-qz5w2" Feb 17 18:41:00 crc kubenswrapper[4680]: I0217 18:41:00.468521 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f-catalog-content\") pod \"redhat-marketplace-qz5w2\" (UID: \"4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f\") " pod="openshift-marketplace/redhat-marketplace-qz5w2" Feb 17 18:41:00 crc kubenswrapper[4680]: I0217 18:41:00.469087 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f-catalog-content\") pod \"redhat-marketplace-qz5w2\" (UID: \"4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f\") " pod="openshift-marketplace/redhat-marketplace-qz5w2" Feb 17 18:41:00 crc kubenswrapper[4680]: I0217 18:41:00.469133 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f-utilities\") pod \"redhat-marketplace-qz5w2\" (UID: \"4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f\") " pod="openshift-marketplace/redhat-marketplace-qz5w2" Feb 17 18:41:00 crc kubenswrapper[4680]: I0217 18:41:00.489058 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcts2\" (UniqueName: \"kubernetes.io/projected/4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f-kube-api-access-pcts2\") pod \"redhat-marketplace-qz5w2\" (UID: \"4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f\") " pod="openshift-marketplace/redhat-marketplace-qz5w2" Feb 17 18:41:00 crc kubenswrapper[4680]: I0217 18:41:00.541759 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kbqbg" Feb 17 18:41:00 crc kubenswrapper[4680]: I0217 18:41:00.556051 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qz5w2" Feb 17 18:41:00 crc kubenswrapper[4680]: I0217 18:41:00.671368 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2knjj\" (UniqueName: \"kubernetes.io/projected/9d30b3f0-4816-434f-8171-aa6597c5a958-kube-api-access-2knjj\") pod \"9d30b3f0-4816-434f-8171-aa6597c5a958\" (UID: \"9d30b3f0-4816-434f-8171-aa6597c5a958\") " Feb 17 18:41:00 crc kubenswrapper[4680]: I0217 18:41:00.671550 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d30b3f0-4816-434f-8171-aa6597c5a958-utilities\") pod \"9d30b3f0-4816-434f-8171-aa6597c5a958\" (UID: \"9d30b3f0-4816-434f-8171-aa6597c5a958\") " Feb 17 18:41:00 crc kubenswrapper[4680]: I0217 18:41:00.671605 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d30b3f0-4816-434f-8171-aa6597c5a958-catalog-content\") pod \"9d30b3f0-4816-434f-8171-aa6597c5a958\" (UID: \"9d30b3f0-4816-434f-8171-aa6597c5a958\") " Feb 17 18:41:00 crc kubenswrapper[4680]: I0217 18:41:00.672588 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d30b3f0-4816-434f-8171-aa6597c5a958-utilities" (OuterVolumeSpecName: "utilities") pod "9d30b3f0-4816-434f-8171-aa6597c5a958" (UID: "9d30b3f0-4816-434f-8171-aa6597c5a958"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:41:00 crc kubenswrapper[4680]: I0217 18:41:00.678057 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d30b3f0-4816-434f-8171-aa6597c5a958-kube-api-access-2knjj" (OuterVolumeSpecName: "kube-api-access-2knjj") pod "9d30b3f0-4816-434f-8171-aa6597c5a958" (UID: "9d30b3f0-4816-434f-8171-aa6597c5a958"). InnerVolumeSpecName "kube-api-access-2knjj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:41:00 crc kubenswrapper[4680]: I0217 18:41:00.773467 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d30b3f0-4816-434f-8171-aa6597c5a958-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 18:41:00 crc kubenswrapper[4680]: I0217 18:41:00.773737 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2knjj\" (UniqueName: \"kubernetes.io/projected/9d30b3f0-4816-434f-8171-aa6597c5a958-kube-api-access-2knjj\") on node \"crc\" DevicePath \"\"" Feb 17 18:41:00 crc kubenswrapper[4680]: I0217 18:41:00.875438 4680 generic.go:334] "Generic (PLEG): container finished" podID="9d30b3f0-4816-434f-8171-aa6597c5a958" containerID="3da3d616fbd25caf2c6fc1972d9834db7ba9774dab10f6a05f937b87f68b17b3" exitCode=0 Feb 17 18:41:00 crc kubenswrapper[4680]: I0217 18:41:00.875476 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kbqbg" event={"ID":"9d30b3f0-4816-434f-8171-aa6597c5a958","Type":"ContainerDied","Data":"3da3d616fbd25caf2c6fc1972d9834db7ba9774dab10f6a05f937b87f68b17b3"} Feb 17 18:41:00 crc kubenswrapper[4680]: I0217 18:41:00.875502 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kbqbg" event={"ID":"9d30b3f0-4816-434f-8171-aa6597c5a958","Type":"ContainerDied","Data":"d58e7e19d03bd5b33a846d6b4b15edddddd7dc72a8a921ea5c2fbc4ab9036a56"} Feb 17 18:41:00 crc kubenswrapper[4680]: I0217 18:41:00.875519 4680 scope.go:117] "RemoveContainer" containerID="3da3d616fbd25caf2c6fc1972d9834db7ba9774dab10f6a05f937b87f68b17b3" Feb 17 18:41:00 crc kubenswrapper[4680]: I0217 18:41:00.875635 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kbqbg" Feb 17 18:41:00 crc kubenswrapper[4680]: I0217 18:41:00.916035 4680 scope.go:117] "RemoveContainer" containerID="85e38fb76a08fa9857ac699d15fc2c662712561ca8b5dc14f07beac5d344f7f5" Feb 17 18:41:00 crc kubenswrapper[4680]: I0217 18:41:00.952161 4680 scope.go:117] "RemoveContainer" containerID="b335f200bf1ba8c6f2d4d589670348eb71ba32df3ca8688b7bd1fedc102a7d93" Feb 17 18:41:01 crc kubenswrapper[4680]: I0217 18:41:01.025637 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d30b3f0-4816-434f-8171-aa6597c5a958-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d30b3f0-4816-434f-8171-aa6597c5a958" (UID: "9d30b3f0-4816-434f-8171-aa6597c5a958"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:41:01 crc kubenswrapper[4680]: I0217 18:41:01.028621 4680 scope.go:117] "RemoveContainer" containerID="3da3d616fbd25caf2c6fc1972d9834db7ba9774dab10f6a05f937b87f68b17b3" Feb 17 18:41:01 crc kubenswrapper[4680]: E0217 18:41:01.032741 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3da3d616fbd25caf2c6fc1972d9834db7ba9774dab10f6a05f937b87f68b17b3\": container with ID starting with 3da3d616fbd25caf2c6fc1972d9834db7ba9774dab10f6a05f937b87f68b17b3 not found: ID does not exist" containerID="3da3d616fbd25caf2c6fc1972d9834db7ba9774dab10f6a05f937b87f68b17b3" Feb 17 18:41:01 crc kubenswrapper[4680]: I0217 18:41:01.032797 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3da3d616fbd25caf2c6fc1972d9834db7ba9774dab10f6a05f937b87f68b17b3"} err="failed to get container status \"3da3d616fbd25caf2c6fc1972d9834db7ba9774dab10f6a05f937b87f68b17b3\": rpc error: code = NotFound desc = could not find container \"3da3d616fbd25caf2c6fc1972d9834db7ba9774dab10f6a05f937b87f68b17b3\": container with ID starting with 3da3d616fbd25caf2c6fc1972d9834db7ba9774dab10f6a05f937b87f68b17b3 not found: ID does not exist" Feb 17 18:41:01 crc kubenswrapper[4680]: I0217 18:41:01.032823 4680 scope.go:117] "RemoveContainer" containerID="85e38fb76a08fa9857ac699d15fc2c662712561ca8b5dc14f07beac5d344f7f5" Feb 17 18:41:01 crc kubenswrapper[4680]: E0217 18:41:01.033554 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85e38fb76a08fa9857ac699d15fc2c662712561ca8b5dc14f07beac5d344f7f5\": container with ID starting with 85e38fb76a08fa9857ac699d15fc2c662712561ca8b5dc14f07beac5d344f7f5 not found: ID does not exist" containerID="85e38fb76a08fa9857ac699d15fc2c662712561ca8b5dc14f07beac5d344f7f5" Feb 17 18:41:01 crc kubenswrapper[4680]: I0217 18:41:01.033573 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85e38fb76a08fa9857ac699d15fc2c662712561ca8b5dc14f07beac5d344f7f5"} err="failed to get container status \"85e38fb76a08fa9857ac699d15fc2c662712561ca8b5dc14f07beac5d344f7f5\": rpc error: code = NotFound desc = could not find container \"85e38fb76a08fa9857ac699d15fc2c662712561ca8b5dc14f07beac5d344f7f5\": container with ID starting with 85e38fb76a08fa9857ac699d15fc2c662712561ca8b5dc14f07beac5d344f7f5 not found: ID does not exist" Feb 17 18:41:01 crc kubenswrapper[4680]: I0217 18:41:01.033590 4680 scope.go:117] "RemoveContainer" containerID="b335f200bf1ba8c6f2d4d589670348eb71ba32df3ca8688b7bd1fedc102a7d93" Feb 17 18:41:01 crc kubenswrapper[4680]: E0217 18:41:01.033821 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b335f200bf1ba8c6f2d4d589670348eb71ba32df3ca8688b7bd1fedc102a7d93\": container with ID starting with b335f200bf1ba8c6f2d4d589670348eb71ba32df3ca8688b7bd1fedc102a7d93 not found: ID does not exist" containerID="b335f200bf1ba8c6f2d4d589670348eb71ba32df3ca8688b7bd1fedc102a7d93" Feb 17 18:41:01 crc kubenswrapper[4680]: I0217 18:41:01.033836 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b335f200bf1ba8c6f2d4d589670348eb71ba32df3ca8688b7bd1fedc102a7d93"} err="failed to get container status \"b335f200bf1ba8c6f2d4d589670348eb71ba32df3ca8688b7bd1fedc102a7d93\": rpc error: code = NotFound desc = could not 
find container \"b335f200bf1ba8c6f2d4d589670348eb71ba32df3ca8688b7bd1fedc102a7d93\": container with ID starting with b335f200bf1ba8c6f2d4d589670348eb71ba32df3ca8688b7bd1fedc102a7d93 not found: ID does not exist" Feb 17 18:41:01 crc kubenswrapper[4680]: I0217 18:41:01.081553 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d30b3f0-4816-434f-8171-aa6597c5a958-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 18:41:01 crc kubenswrapper[4680]: I0217 18:41:01.196647 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kbqbg"] Feb 17 18:41:01 crc kubenswrapper[4680]: I0217 18:41:01.207425 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kbqbg"] Feb 17 18:41:01 crc kubenswrapper[4680]: I0217 18:41:01.234204 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qz5w2"] Feb 17 18:41:01 crc kubenswrapper[4680]: I0217 18:41:01.886656 4680 generic.go:334] "Generic (PLEG): container finished" podID="4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f" containerID="75c0ac3def3064a7b8602251535cef55af725dec8da048a2985ef274bc435b0a" exitCode=0 Feb 17 18:41:01 crc kubenswrapper[4680]: I0217 18:41:01.886704 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qz5w2" event={"ID":"4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f","Type":"ContainerDied","Data":"75c0ac3def3064a7b8602251535cef55af725dec8da048a2985ef274bc435b0a"} Feb 17 18:41:01 crc kubenswrapper[4680]: I0217 18:41:01.886923 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qz5w2" event={"ID":"4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f","Type":"ContainerStarted","Data":"da2cad3b443b6d03beaa4a358294f3e8cb42430143a6435e6c0b9a1807ce6db3"} Feb 17 18:41:02 crc kubenswrapper[4680]: I0217 18:41:02.897187 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qz5w2" event={"ID":"4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f","Type":"ContainerStarted","Data":"c0a70f68f77b64cfdc8f61a95eeb3bd81d1ee93a3e7a5604c2f112a21503a84c"} Feb 17 18:41:03 crc kubenswrapper[4680]: I0217 18:41:03.114595 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d30b3f0-4816-434f-8171-aa6597c5a958" path="/var/lib/kubelet/pods/9d30b3f0-4816-434f-8171-aa6597c5a958/volumes" Feb 17 18:41:03 crc kubenswrapper[4680]: I0217 18:41:03.908527 4680 generic.go:334] "Generic (PLEG): container finished" podID="4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f" containerID="c0a70f68f77b64cfdc8f61a95eeb3bd81d1ee93a3e7a5604c2f112a21503a84c" exitCode=0 Feb 17 18:41:03 crc kubenswrapper[4680]: I0217 18:41:03.908568 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qz5w2" event={"ID":"4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f","Type":"ContainerDied","Data":"c0a70f68f77b64cfdc8f61a95eeb3bd81d1ee93a3e7a5604c2f112a21503a84c"} Feb 17 18:41:04 crc kubenswrapper[4680]: I0217 18:41:04.917899 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qz5w2" event={"ID":"4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f","Type":"ContainerStarted","Data":"ce4781358cb6852acc901ad2b4906810cdec84de674fb51fb525e0dd37cc456e"} Feb 17 18:41:04 crc kubenswrapper[4680]: I0217 18:41:04.939912 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-qz5w2" podStartSLOduration=2.496669818 podStartE2EDuration="4.939894407s" podCreationTimestamp="2026-02-17 18:41:00 +0000 UTC" firstStartedPulling="2026-02-17 18:41:01.888396005 +0000 UTC m=+3391.439294684" lastFinishedPulling="2026-02-17 18:41:04.331620574 +0000 UTC m=+3393.882519273" observedRunningTime="2026-02-17 18:41:04.93616427 +0000 UTC m=+3394.487062949" watchObservedRunningTime="2026-02-17 18:41:04.939894407 +0000 UTC m=+3394.490793086" Feb 17 18:41:05 crc kubenswrapper[4680]: I0217 18:41:05.588530 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:41:05 crc kubenswrapper[4680]: I0217 18:41:05.588596 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:41:05 crc kubenswrapper[4680]: I0217 18:41:05.588646 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" Feb 17 18:41:05 crc kubenswrapper[4680]: I0217 18:41:05.589492 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b02e6d4c7f88ab2b2c986289df7f8665a41abc86c42a850f0220d3636dfaedbf"} pod="openshift-machine-config-operator/machine-config-daemon-rfw56" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 18:41:05 crc kubenswrapper[4680]: I0217 18:41:05.589567 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" containerID="cri-o://b02e6d4c7f88ab2b2c986289df7f8665a41abc86c42a850f0220d3636dfaedbf" gracePeriod=600 Feb 17 18:41:05 crc kubenswrapper[4680]: I0217 18:41:05.929859 4680 generic.go:334] "Generic (PLEG): container finished" podID="8dd08058-016e-4607-8be6-40463674c2fe" containerID="b02e6d4c7f88ab2b2c986289df7f8665a41abc86c42a850f0220d3636dfaedbf" exitCode=0 Feb 17 18:41:05 crc kubenswrapper[4680]: I0217 18:41:05.929931 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerDied","Data":"b02e6d4c7f88ab2b2c986289df7f8665a41abc86c42a850f0220d3636dfaedbf"} Feb 17 18:41:05 crc kubenswrapper[4680]: I0217 18:41:05.930254 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerStarted","Data":"10d08de0ee827a1b4b9d81d0e6ba29cabc2146b62e41f3a140223a4231a52560"} Feb 17 18:41:05 crc kubenswrapper[4680]: I0217 18:41:05.930287 4680 scope.go:117] "RemoveContainer" containerID="fb46451ff71571aa4f78ba430701c1a2a0d90a6e4e3c601e4b6131f99f241262" Feb 17 18:41:10 crc kubenswrapper[4680]: I0217 18:41:10.556430 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-qz5w2" Feb 17 18:41:10 crc kubenswrapper[4680]: I0217 18:41:10.557252 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qz5w2" Feb 17 18:41:10 crc kubenswrapper[4680]: I0217 18:41:10.598978 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qz5w2" Feb 17 18:41:11 crc kubenswrapper[4680]: I0217 18:41:11.035054 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qz5w2" Feb 17 18:41:11 crc kubenswrapper[4680]: I0217 18:41:11.094106 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qz5w2"] Feb 17 18:41:12 crc kubenswrapper[4680]: I0217 18:41:12.991617 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qz5w2" podUID="4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f" containerName="registry-server" containerID="cri-o://ce4781358cb6852acc901ad2b4906810cdec84de674fb51fb525e0dd37cc456e" gracePeriod=2 Feb 17 18:41:14 crc kubenswrapper[4680]: I0217 18:41:14.001385 4680 generic.go:334] "Generic (PLEG): container finished" podID="4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f" containerID="ce4781358cb6852acc901ad2b4906810cdec84de674fb51fb525e0dd37cc456e" exitCode=0 Feb 17 18:41:14 crc kubenswrapper[4680]: I0217 18:41:14.001672 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qz5w2" event={"ID":"4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f","Type":"ContainerDied","Data":"ce4781358cb6852acc901ad2b4906810cdec84de674fb51fb525e0dd37cc456e"} Feb 17 18:41:14 crc kubenswrapper[4680]: I0217 18:41:14.001697 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qz5w2" event={"ID":"4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f","Type":"ContainerDied","Data":"da2cad3b443b6d03beaa4a358294f3e8cb42430143a6435e6c0b9a1807ce6db3"} Feb 17 18:41:14 crc kubenswrapper[4680]: I0217 18:41:14.001707 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da2cad3b443b6d03beaa4a358294f3e8cb42430143a6435e6c0b9a1807ce6db3" Feb 17 18:41:14 crc kubenswrapper[4680]: I0217 18:41:14.030390 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qz5w2" Feb 17 18:41:14 crc kubenswrapper[4680]: I0217 18:41:14.139867 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f-catalog-content\") pod \"4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f\" (UID: \"4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f\") " Feb 17 18:41:14 crc kubenswrapper[4680]: I0217 18:41:14.139912 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f-utilities\") pod \"4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f\" (UID: \"4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f\") " Feb 17 18:41:14 crc kubenswrapper[4680]: I0217 18:41:14.139931 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcts2\" (UniqueName: \"kubernetes.io/projected/4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f-kube-api-access-pcts2\") pod \"4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f\" (UID: \"4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f\") " Feb 17 18:41:14 crc kubenswrapper[4680]: I0217 18:41:14.141034 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f-utilities" (OuterVolumeSpecName: "utilities") pod "4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f" (UID: "4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:41:14 crc kubenswrapper[4680]: I0217 18:41:14.145426 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f-kube-api-access-pcts2" (OuterVolumeSpecName: "kube-api-access-pcts2") pod "4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f" (UID: "4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f"). InnerVolumeSpecName "kube-api-access-pcts2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:41:14 crc kubenswrapper[4680]: I0217 18:41:14.173775 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f" (UID: "4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:41:14 crc kubenswrapper[4680]: I0217 18:41:14.242381 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 18:41:14 crc kubenswrapper[4680]: I0217 18:41:14.242429 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 18:41:14 crc kubenswrapper[4680]: I0217 18:41:14.242438 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcts2\" (UniqueName: \"kubernetes.io/projected/4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f-kube-api-access-pcts2\") on node \"crc\" DevicePath \"\"" Feb 17 18:41:15 crc kubenswrapper[4680]: I0217 18:41:15.009532 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qz5w2" Feb 17 18:41:15 crc kubenswrapper[4680]: I0217 18:41:15.039560 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qz5w2"] Feb 17 18:41:15 crc kubenswrapper[4680]: I0217 18:41:15.047444 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qz5w2"] Feb 17 18:41:15 crc kubenswrapper[4680]: I0217 18:41:15.117878 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f" path="/var/lib/kubelet/pods/4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f/volumes" Feb 17 18:42:53 crc kubenswrapper[4680]: I0217 18:42:53.846532 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c8v8v"] Feb 17 18:42:53 crc kubenswrapper[4680]: E0217 18:42:53.848256 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d30b3f0-4816-434f-8171-aa6597c5a958" containerName="registry-server" Feb 17 18:42:53 crc kubenswrapper[4680]: I0217 18:42:53.848422 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d30b3f0-4816-434f-8171-aa6597c5a958" containerName="registry-server" Feb 17 18:42:53 crc kubenswrapper[4680]: E0217 18:42:53.848501 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f" containerName="extract-content" Feb 17 18:42:53 crc kubenswrapper[4680]: I0217 18:42:53.848559 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f" containerName="extract-content" Feb 17 18:42:53 crc kubenswrapper[4680]: E0217 18:42:53.848626 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f" containerName="registry-server" Feb 17 18:42:53 crc kubenswrapper[4680]: I0217 18:42:53.848682 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f" containerName="registry-server" Feb 17 18:42:53 crc kubenswrapper[4680]: E0217 18:42:53.848744 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d30b3f0-4816-434f-8171-aa6597c5a958" containerName="extract-content" Feb 17 18:42:53 crc kubenswrapper[4680]: I0217 18:42:53.848811 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d30b3f0-4816-434f-8171-aa6597c5a958" containerName="extract-content" Feb 17 18:42:53 crc kubenswrapper[4680]: E0217 18:42:53.848906 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d30b3f0-4816-434f-8171-aa6597c5a958" containerName="extract-utilities" Feb 17 18:42:53 crc kubenswrapper[4680]: I0217 18:42:53.848980 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d30b3f0-4816-434f-8171-aa6597c5a958" containerName="extract-utilities" Feb 17 18:42:53 crc kubenswrapper[4680]: E0217 18:42:53.849050 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f" containerName="extract-utilities" Feb 17 18:42:53 crc kubenswrapper[4680]: I0217 18:42:53.849111 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f" containerName="extract-utilities" Feb 17 18:42:53 crc kubenswrapper[4680]: I0217 18:42:53.849360 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d30b3f0-4816-434f-8171-aa6597c5a958" containerName="registry-server" Feb 17 18:42:53 crc kubenswrapper[4680]: I0217 18:42:53.849449 4680 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="4ae6b8e1-ebca-46cc-ac84-0c71e62ef05f" containerName="registry-server" Feb 17 18:42:53 crc kubenswrapper[4680]: I0217 18:42:53.850791 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c8v8v" Feb 17 18:42:53 crc kubenswrapper[4680]: I0217 18:42:53.873679 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c8v8v"] Feb 17 18:42:53 crc kubenswrapper[4680]: I0217 18:42:53.973982 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf84c64-15ed-4135-8f2a-bfa4a69c0191-catalog-content\") pod \"redhat-operators-c8v8v\" (UID: \"faf84c64-15ed-4135-8f2a-bfa4a69c0191\") " pod="openshift-marketplace/redhat-operators-c8v8v" Feb 17 18:42:53 crc kubenswrapper[4680]: I0217 18:42:53.974035 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cgt8\" (UniqueName: \"kubernetes.io/projected/faf84c64-15ed-4135-8f2a-bfa4a69c0191-kube-api-access-4cgt8\") pod \"redhat-operators-c8v8v\" (UID: \"faf84c64-15ed-4135-8f2a-bfa4a69c0191\") " pod="openshift-marketplace/redhat-operators-c8v8v" Feb 17 18:42:53 crc kubenswrapper[4680]: I0217 18:42:53.974181 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf84c64-15ed-4135-8f2a-bfa4a69c0191-utilities\") pod \"redhat-operators-c8v8v\" (UID: \"faf84c64-15ed-4135-8f2a-bfa4a69c0191\") " pod="openshift-marketplace/redhat-operators-c8v8v" Feb 17 18:42:54 crc kubenswrapper[4680]: I0217 18:42:54.075547 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf84c64-15ed-4135-8f2a-bfa4a69c0191-utilities\") pod \"redhat-operators-c8v8v\" (UID: \"faf84c64-15ed-4135-8f2a-bfa4a69c0191\") " pod="openshift-marketplace/redhat-operators-c8v8v" Feb 17 18:42:54 crc kubenswrapper[4680]: I0217 18:42:54.075890 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf84c64-15ed-4135-8f2a-bfa4a69c0191-catalog-content\") pod \"redhat-operators-c8v8v\" (UID: \"faf84c64-15ed-4135-8f2a-bfa4a69c0191\") " pod="openshift-marketplace/redhat-operators-c8v8v" Feb 17 18:42:54 crc kubenswrapper[4680]: I0217 18:42:54.076023 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cgt8\" (UniqueName: \"kubernetes.io/projected/faf84c64-15ed-4135-8f2a-bfa4a69c0191-kube-api-access-4cgt8\") pod \"redhat-operators-c8v8v\" (UID: \"faf84c64-15ed-4135-8f2a-bfa4a69c0191\") " pod="openshift-marketplace/redhat-operators-c8v8v" Feb 17 18:42:54 crc kubenswrapper[4680]: I0217 18:42:54.076588 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf84c64-15ed-4135-8f2a-bfa4a69c0191-utilities\") pod \"redhat-operators-c8v8v\" (UID: \"faf84c64-15ed-4135-8f2a-bfa4a69c0191\") " pod="openshift-marketplace/redhat-operators-c8v8v" Feb 17 18:42:54 crc kubenswrapper[4680]: I0217 18:42:54.076865 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf84c64-15ed-4135-8f2a-bfa4a69c0191-catalog-content\") pod \"redhat-operators-c8v8v\" (UID: \"faf84c64-15ed-4135-8f2a-bfa4a69c0191\") " 
pod="openshift-marketplace/redhat-operators-c8v8v" Feb 17 18:42:54 crc kubenswrapper[4680]: I0217 18:42:54.113087 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cgt8\" (UniqueName: \"kubernetes.io/projected/faf84c64-15ed-4135-8f2a-bfa4a69c0191-kube-api-access-4cgt8\") pod \"redhat-operators-c8v8v\" (UID: \"faf84c64-15ed-4135-8f2a-bfa4a69c0191\") " pod="openshift-marketplace/redhat-operators-c8v8v" Feb 17 18:42:54 crc kubenswrapper[4680]: I0217 18:42:54.169388 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c8v8v" Feb 17 18:42:54 crc kubenswrapper[4680]: I0217 18:42:54.714066 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c8v8v"] Feb 17 18:42:54 crc kubenswrapper[4680]: I0217 18:42:54.840634 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c8v8v" event={"ID":"faf84c64-15ed-4135-8f2a-bfa4a69c0191","Type":"ContainerStarted","Data":"17f25e7334189e7274f4e1b75086ecd5bc1b71c1d322d90d4ccd6f7e03ee6eb5"} Feb 17 18:42:55 crc kubenswrapper[4680]: I0217 18:42:55.850839 4680 generic.go:334] "Generic (PLEG): container finished" podID="faf84c64-15ed-4135-8f2a-bfa4a69c0191" containerID="2283231a9f7a06f0dd08c28d615463685c3a52fe33851d00c313b4845d6c4fed" exitCode=0 Feb 17 18:42:55 crc kubenswrapper[4680]: I0217 18:42:55.851151 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c8v8v" event={"ID":"faf84c64-15ed-4135-8f2a-bfa4a69c0191","Type":"ContainerDied","Data":"2283231a9f7a06f0dd08c28d615463685c3a52fe33851d00c313b4845d6c4fed"} Feb 17 18:42:56 crc kubenswrapper[4680]: I0217 18:42:56.898191 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c8v8v" event={"ID":"faf84c64-15ed-4135-8f2a-bfa4a69c0191","Type":"ContainerStarted","Data":"311e8609b42925a5ca8d909d1382aa2006930cc695c4c30ecdd96a3bfb614ac7"} Feb 17 18:43:01 crc kubenswrapper[4680]: I0217 18:43:01.948792 4680 generic.go:334] "Generic (PLEG): container finished" podID="faf84c64-15ed-4135-8f2a-bfa4a69c0191" containerID="311e8609b42925a5ca8d909d1382aa2006930cc695c4c30ecdd96a3bfb614ac7" exitCode=0 Feb 17 18:43:01 crc kubenswrapper[4680]: I0217 18:43:01.948882 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c8v8v" event={"ID":"faf84c64-15ed-4135-8f2a-bfa4a69c0191","Type":"ContainerDied","Data":"311e8609b42925a5ca8d909d1382aa2006930cc695c4c30ecdd96a3bfb614ac7"} Feb 17 18:43:02 crc kubenswrapper[4680]: I0217 18:43:02.961126 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c8v8v" event={"ID":"faf84c64-15ed-4135-8f2a-bfa4a69c0191","Type":"ContainerStarted","Data":"06bd0322d4474ea7f53f1020820cc9584568d7de5679332c5739fd33420db440"} Feb 17 18:43:02 crc kubenswrapper[4680]: I0217 18:43:02.980981 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c8v8v" podStartSLOduration=3.430342227 podStartE2EDuration="9.980961235s" podCreationTimestamp="2026-02-17 18:42:53 +0000 UTC" firstStartedPulling="2026-02-17 18:42:55.853716485 +0000 UTC m=+3505.404615164" lastFinishedPulling="2026-02-17 18:43:02.404335493 +0000 UTC m=+3511.955234172" observedRunningTime="2026-02-17 18:43:02.976458693 +0000 UTC m=+3512.527357392" watchObservedRunningTime="2026-02-17 18:43:02.980961235 +0000 UTC m=+3512.531859914" Feb 
17 18:43:04 crc kubenswrapper[4680]: I0217 18:43:04.170231 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c8v8v" Feb 17 18:43:04 crc kubenswrapper[4680]: I0217 18:43:04.170961 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c8v8v" Feb 17 18:43:05 crc kubenswrapper[4680]: I0217 18:43:05.230910 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c8v8v" podUID="faf84c64-15ed-4135-8f2a-bfa4a69c0191" containerName="registry-server" probeResult="failure" output=< Feb 17 18:43:05 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Feb 17 18:43:05 crc kubenswrapper[4680]: > Feb 17 18:43:05 crc kubenswrapper[4680]: I0217 18:43:05.589140 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:43:05 crc kubenswrapper[4680]: I0217 18:43:05.589195 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:43:14 crc kubenswrapper[4680]: I0217 18:43:14.223194 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c8v8v" Feb 17 18:43:14 crc kubenswrapper[4680]: I0217 18:43:14.289474 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c8v8v" Feb 17 18:43:14 crc kubenswrapper[4680]: I0217 18:43:14.475082 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c8v8v"] Feb 17 18:43:16 crc kubenswrapper[4680]: I0217 18:43:16.069168 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c8v8v" podUID="faf84c64-15ed-4135-8f2a-bfa4a69c0191" containerName="registry-server" containerID="cri-o://06bd0322d4474ea7f53f1020820cc9584568d7de5679332c5739fd33420db440" gracePeriod=2 Feb 17 18:43:16 crc kubenswrapper[4680]: I0217 18:43:16.626715 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c8v8v" Feb 17 18:43:16 crc kubenswrapper[4680]: I0217 18:43:16.742537 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cgt8\" (UniqueName: \"kubernetes.io/projected/faf84c64-15ed-4135-8f2a-bfa4a69c0191-kube-api-access-4cgt8\") pod \"faf84c64-15ed-4135-8f2a-bfa4a69c0191\" (UID: \"faf84c64-15ed-4135-8f2a-bfa4a69c0191\") " Feb 17 18:43:16 crc kubenswrapper[4680]: I0217 18:43:16.742603 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf84c64-15ed-4135-8f2a-bfa4a69c0191-catalog-content\") pod \"faf84c64-15ed-4135-8f2a-bfa4a69c0191\" (UID: \"faf84c64-15ed-4135-8f2a-bfa4a69c0191\") " Feb 17 18:43:16 crc kubenswrapper[4680]: I0217 18:43:16.742648 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf84c64-15ed-4135-8f2a-bfa4a69c0191-utilities\") pod \"faf84c64-15ed-4135-8f2a-bfa4a69c0191\" (UID: \"faf84c64-15ed-4135-8f2a-bfa4a69c0191\") " Feb 17 18:43:16 crc kubenswrapper[4680]: I0217 18:43:16.743710 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faf84c64-15ed-4135-8f2a-bfa4a69c0191-utilities" (OuterVolumeSpecName: "utilities") pod "faf84c64-15ed-4135-8f2a-bfa4a69c0191" (UID: "faf84c64-15ed-4135-8f2a-bfa4a69c0191"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:43:16 crc kubenswrapper[4680]: I0217 18:43:16.752379 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faf84c64-15ed-4135-8f2a-bfa4a69c0191-kube-api-access-4cgt8" (OuterVolumeSpecName: "kube-api-access-4cgt8") pod "faf84c64-15ed-4135-8f2a-bfa4a69c0191" (UID: "faf84c64-15ed-4135-8f2a-bfa4a69c0191"). InnerVolumeSpecName "kube-api-access-4cgt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:43:16 crc kubenswrapper[4680]: I0217 18:43:16.856815 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cgt8\" (UniqueName: \"kubernetes.io/projected/faf84c64-15ed-4135-8f2a-bfa4a69c0191-kube-api-access-4cgt8\") on node \"crc\" DevicePath \"\"" Feb 17 18:43:16 crc kubenswrapper[4680]: I0217 18:43:16.856852 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf84c64-15ed-4135-8f2a-bfa4a69c0191-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 18:43:16 crc kubenswrapper[4680]: I0217 18:43:16.886783 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faf84c64-15ed-4135-8f2a-bfa4a69c0191-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "faf84c64-15ed-4135-8f2a-bfa4a69c0191" (UID: "faf84c64-15ed-4135-8f2a-bfa4a69c0191"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:43:16 crc kubenswrapper[4680]: I0217 18:43:16.958818 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf84c64-15ed-4135-8f2a-bfa4a69c0191-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 18:43:17 crc kubenswrapper[4680]: I0217 18:43:17.077720 4680 generic.go:334] "Generic (PLEG): container finished" podID="faf84c64-15ed-4135-8f2a-bfa4a69c0191" containerID="06bd0322d4474ea7f53f1020820cc9584568d7de5679332c5739fd33420db440" exitCode=0 Feb 17 18:43:17 crc kubenswrapper[4680]: I0217 18:43:17.077791 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c8v8v" Feb 17 18:43:17 crc kubenswrapper[4680]: I0217 18:43:17.078840 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c8v8v" event={"ID":"faf84c64-15ed-4135-8f2a-bfa4a69c0191","Type":"ContainerDied","Data":"06bd0322d4474ea7f53f1020820cc9584568d7de5679332c5739fd33420db440"} Feb 17 18:43:17 crc kubenswrapper[4680]: I0217 18:43:17.078960 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c8v8v" event={"ID":"faf84c64-15ed-4135-8f2a-bfa4a69c0191","Type":"ContainerDied","Data":"17f25e7334189e7274f4e1b75086ecd5bc1b71c1d322d90d4ccd6f7e03ee6eb5"} Feb 17 18:43:17 crc kubenswrapper[4680]: I0217 18:43:17.079028 4680 scope.go:117] "RemoveContainer" containerID="06bd0322d4474ea7f53f1020820cc9584568d7de5679332c5739fd33420db440" Feb 17 18:43:17 crc kubenswrapper[4680]: I0217 18:43:17.115631 4680 scope.go:117] "RemoveContainer" containerID="311e8609b42925a5ca8d909d1382aa2006930cc695c4c30ecdd96a3bfb614ac7" Feb 17 18:43:17 crc kubenswrapper[4680]: I0217 18:43:17.130065 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c8v8v"] Feb 17 18:43:17 crc kubenswrapper[4680]: I0217 18:43:17.130124 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c8v8v"] Feb 17 18:43:17 crc kubenswrapper[4680]: I0217 18:43:17.144065 4680 scope.go:117] "RemoveContainer" containerID="2283231a9f7a06f0dd08c28d615463685c3a52fe33851d00c313b4845d6c4fed" Feb 17 18:43:17 crc kubenswrapper[4680]: I0217 18:43:17.184270 4680 scope.go:117] "RemoveContainer" containerID="06bd0322d4474ea7f53f1020820cc9584568d7de5679332c5739fd33420db440" Feb 17 18:43:17 crc kubenswrapper[4680]: E0217 18:43:17.184812 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06bd0322d4474ea7f53f1020820cc9584568d7de5679332c5739fd33420db440\": container with ID starting with 06bd0322d4474ea7f53f1020820cc9584568d7de5679332c5739fd33420db440 not found: ID does not exist" containerID="06bd0322d4474ea7f53f1020820cc9584568d7de5679332c5739fd33420db440" Feb 17 18:43:17 crc kubenswrapper[4680]: I0217 18:43:17.184869 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06bd0322d4474ea7f53f1020820cc9584568d7de5679332c5739fd33420db440"} err="failed to get container status \"06bd0322d4474ea7f53f1020820cc9584568d7de5679332c5739fd33420db440\": rpc error: code = NotFound desc = could not find container \"06bd0322d4474ea7f53f1020820cc9584568d7de5679332c5739fd33420db440\": container with ID starting with 06bd0322d4474ea7f53f1020820cc9584568d7de5679332c5739fd33420db440 not found: ID does not exist" Feb 17 18:43:17 crc 
kubenswrapper[4680]: I0217 18:43:17.184901 4680 scope.go:117] "RemoveContainer" containerID="311e8609b42925a5ca8d909d1382aa2006930cc695c4c30ecdd96a3bfb614ac7" Feb 17 18:43:17 crc kubenswrapper[4680]: E0217 18:43:17.190972 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"311e8609b42925a5ca8d909d1382aa2006930cc695c4c30ecdd96a3bfb614ac7\": container with ID starting with 311e8609b42925a5ca8d909d1382aa2006930cc695c4c30ecdd96a3bfb614ac7 not found: ID does not exist" containerID="311e8609b42925a5ca8d909d1382aa2006930cc695c4c30ecdd96a3bfb614ac7" Feb 17 18:43:17 crc kubenswrapper[4680]: I0217 18:43:17.191043 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"311e8609b42925a5ca8d909d1382aa2006930cc695c4c30ecdd96a3bfb614ac7"} err="failed to get container status \"311e8609b42925a5ca8d909d1382aa2006930cc695c4c30ecdd96a3bfb614ac7\": rpc error: code = NotFound desc = could not find container \"311e8609b42925a5ca8d909d1382aa2006930cc695c4c30ecdd96a3bfb614ac7\": container with ID starting with 311e8609b42925a5ca8d909d1382aa2006930cc695c4c30ecdd96a3bfb614ac7 not found: ID does not exist" Feb 17 18:43:17 crc kubenswrapper[4680]: I0217 18:43:17.191075 4680 scope.go:117] "RemoveContainer" containerID="2283231a9f7a06f0dd08c28d615463685c3a52fe33851d00c313b4845d6c4fed" Feb 17 18:43:17 crc kubenswrapper[4680]: E0217 18:43:17.191479 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2283231a9f7a06f0dd08c28d615463685c3a52fe33851d00c313b4845d6c4fed\": container with ID starting with 2283231a9f7a06f0dd08c28d615463685c3a52fe33851d00c313b4845d6c4fed not found: ID does not exist" containerID="2283231a9f7a06f0dd08c28d615463685c3a52fe33851d00c313b4845d6c4fed" Feb 17 18:43:17 crc kubenswrapper[4680]: I0217 18:43:17.191503 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2283231a9f7a06f0dd08c28d615463685c3a52fe33851d00c313b4845d6c4fed"} err="failed to get container status \"2283231a9f7a06f0dd08c28d615463685c3a52fe33851d00c313b4845d6c4fed\": rpc error: code = NotFound desc = could not find container \"2283231a9f7a06f0dd08c28d615463685c3a52fe33851d00c313b4845d6c4fed\": container with ID starting with 2283231a9f7a06f0dd08c28d615463685c3a52fe33851d00c313b4845d6c4fed not found: ID does not exist" Feb 17 18:43:19 crc kubenswrapper[4680]: I0217 18:43:19.116276 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faf84c64-15ed-4135-8f2a-bfa4a69c0191" path="/var/lib/kubelet/pods/faf84c64-15ed-4135-8f2a-bfa4a69c0191/volumes" Feb 17 18:43:35 crc kubenswrapper[4680]: I0217 18:43:35.588691 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:43:35 crc kubenswrapper[4680]: I0217 18:43:35.589351 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:44:05 crc kubenswrapper[4680]: I0217 18:44:05.588939 4680 patch_prober.go:28] interesting 
pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:44:05 crc kubenswrapper[4680]: I0217 18:44:05.589507 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:44:05 crc kubenswrapper[4680]: I0217 18:44:05.589550 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" Feb 17 18:44:05 crc kubenswrapper[4680]: I0217 18:44:05.590289 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"10d08de0ee827a1b4b9d81d0e6ba29cabc2146b62e41f3a140223a4231a52560"} pod="openshift-machine-config-operator/machine-config-daemon-rfw56" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 18:44:05 crc kubenswrapper[4680]: I0217 18:44:05.590366 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" containerID="cri-o://10d08de0ee827a1b4b9d81d0e6ba29cabc2146b62e41f3a140223a4231a52560" gracePeriod=600 Feb 17 18:44:05 crc kubenswrapper[4680]: E0217 18:44:05.720587 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:44:06 crc kubenswrapper[4680]: I0217 18:44:06.486061 4680 generic.go:334] "Generic (PLEG): container finished" podID="8dd08058-016e-4607-8be6-40463674c2fe" containerID="10d08de0ee827a1b4b9d81d0e6ba29cabc2146b62e41f3a140223a4231a52560" exitCode=0 Feb 17 18:44:06 crc kubenswrapper[4680]: I0217 18:44:06.486112 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerDied","Data":"10d08de0ee827a1b4b9d81d0e6ba29cabc2146b62e41f3a140223a4231a52560"} Feb 17 18:44:06 crc kubenswrapper[4680]: I0217 18:44:06.486155 4680 scope.go:117] "RemoveContainer" containerID="b02e6d4c7f88ab2b2c986289df7f8665a41abc86c42a850f0220d3636dfaedbf" Feb 17 18:44:06 crc kubenswrapper[4680]: I0217 18:44:06.487869 4680 scope.go:117] "RemoveContainer" containerID="10d08de0ee827a1b4b9d81d0e6ba29cabc2146b62e41f3a140223a4231a52560" Feb 17 18:44:06 crc kubenswrapper[4680]: E0217 18:44:06.488298 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:44:20 crc kubenswrapper[4680]: I0217 18:44:20.105583 4680 scope.go:117] "RemoveContainer" containerID="10d08de0ee827a1b4b9d81d0e6ba29cabc2146b62e41f3a140223a4231a52560" Feb 17 18:44:20 crc kubenswrapper[4680]: E0217 18:44:20.106209 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:44:33 crc kubenswrapper[4680]: I0217 18:44:33.105351 4680 scope.go:117] "RemoveContainer" containerID="10d08de0ee827a1b4b9d81d0e6ba29cabc2146b62e41f3a140223a4231a52560" Feb 17 18:44:33 crc kubenswrapper[4680]: E0217 18:44:33.106196 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:44:45 crc kubenswrapper[4680]: I0217 18:44:45.105763 4680 scope.go:117] "RemoveContainer" containerID="10d08de0ee827a1b4b9d81d0e6ba29cabc2146b62e41f3a140223a4231a52560" Feb 17 18:44:45 crc kubenswrapper[4680]: E0217 18:44:45.106600 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:44:57 crc kubenswrapper[4680]: I0217 18:44:57.105675 4680 scope.go:117] "RemoveContainer" containerID="10d08de0ee827a1b4b9d81d0e6ba29cabc2146b62e41f3a140223a4231a52560" Feb 17 18:44:57 crc kubenswrapper[4680]: E0217 18:44:57.106349 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:45:00 crc kubenswrapper[4680]: I0217 18:45:00.181018 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522565-mwcp6"] Feb 17 18:45:00 crc kubenswrapper[4680]: E0217 18:45:00.183835 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faf84c64-15ed-4135-8f2a-bfa4a69c0191" containerName="extract-utilities" Feb 17 18:45:00 crc kubenswrapper[4680]: I0217 18:45:00.183883 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf84c64-15ed-4135-8f2a-bfa4a69c0191" containerName="extract-utilities" Feb 17 18:45:00 crc kubenswrapper[4680]: E0217 18:45:00.183907 4680 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="faf84c64-15ed-4135-8f2a-bfa4a69c0191" containerName="extract-content" Feb 17 18:45:00 crc kubenswrapper[4680]: I0217 18:45:00.183914 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf84c64-15ed-4135-8f2a-bfa4a69c0191" containerName="extract-content" Feb 17 18:45:00 crc kubenswrapper[4680]: E0217 18:45:00.183924 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faf84c64-15ed-4135-8f2a-bfa4a69c0191" containerName="registry-server" Feb 17 18:45:00 crc kubenswrapper[4680]: I0217 18:45:00.183930 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf84c64-15ed-4135-8f2a-bfa4a69c0191" containerName="registry-server" Feb 17 18:45:00 crc kubenswrapper[4680]: I0217 18:45:00.184193 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="faf84c64-15ed-4135-8f2a-bfa4a69c0191" containerName="registry-server" Feb 17 18:45:00 crc kubenswrapper[4680]: I0217 18:45:00.185126 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522565-mwcp6" Feb 17 18:45:00 crc kubenswrapper[4680]: I0217 18:45:00.191376 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 18:45:00 crc kubenswrapper[4680]: I0217 18:45:00.201520 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522565-mwcp6"] Feb 17 18:45:00 crc kubenswrapper[4680]: I0217 18:45:00.208894 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 18:45:00 crc kubenswrapper[4680]: I0217 18:45:00.273888 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4522453-a73a-4176-b662-e75d9b3e1383-secret-volume\") pod \"collect-profiles-29522565-mwcp6\" (UID: \"a4522453-a73a-4176-b662-e75d9b3e1383\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522565-mwcp6" Feb 17 18:45:00 crc kubenswrapper[4680]: I0217 18:45:00.273944 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg7z6\" (UniqueName: \"kubernetes.io/projected/a4522453-a73a-4176-b662-e75d9b3e1383-kube-api-access-mg7z6\") pod \"collect-profiles-29522565-mwcp6\" (UID: \"a4522453-a73a-4176-b662-e75d9b3e1383\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522565-mwcp6" Feb 17 18:45:00 crc kubenswrapper[4680]: I0217 18:45:00.274036 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4522453-a73a-4176-b662-e75d9b3e1383-config-volume\") pod \"collect-profiles-29522565-mwcp6\" (UID: \"a4522453-a73a-4176-b662-e75d9b3e1383\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522565-mwcp6" Feb 17 18:45:00 crc kubenswrapper[4680]: I0217 18:45:00.376447 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4522453-a73a-4176-b662-e75d9b3e1383-secret-volume\") pod \"collect-profiles-29522565-mwcp6\" (UID: \"a4522453-a73a-4176-b662-e75d9b3e1383\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522565-mwcp6" Feb 17 18:45:00 crc kubenswrapper[4680]: I0217 18:45:00.376504 4680 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-mg7z6\" (UniqueName: \"kubernetes.io/projected/a4522453-a73a-4176-b662-e75d9b3e1383-kube-api-access-mg7z6\") pod \"collect-profiles-29522565-mwcp6\" (UID: \"a4522453-a73a-4176-b662-e75d9b3e1383\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522565-mwcp6" Feb 17 18:45:00 crc kubenswrapper[4680]: I0217 18:45:00.376525 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4522453-a73a-4176-b662-e75d9b3e1383-config-volume\") pod \"collect-profiles-29522565-mwcp6\" (UID: \"a4522453-a73a-4176-b662-e75d9b3e1383\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522565-mwcp6" Feb 17 18:45:00 crc kubenswrapper[4680]: I0217 18:45:00.377920 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4522453-a73a-4176-b662-e75d9b3e1383-config-volume\") pod \"collect-profiles-29522565-mwcp6\" (UID: \"a4522453-a73a-4176-b662-e75d9b3e1383\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522565-mwcp6" Feb 17 18:45:00 crc kubenswrapper[4680]: I0217 18:45:00.385334 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4522453-a73a-4176-b662-e75d9b3e1383-secret-volume\") pod \"collect-profiles-29522565-mwcp6\" (UID: \"a4522453-a73a-4176-b662-e75d9b3e1383\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522565-mwcp6" Feb 17 18:45:00 crc kubenswrapper[4680]: I0217 18:45:00.392799 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg7z6\" (UniqueName: \"kubernetes.io/projected/a4522453-a73a-4176-b662-e75d9b3e1383-kube-api-access-mg7z6\") pod \"collect-profiles-29522565-mwcp6\" (UID: \"a4522453-a73a-4176-b662-e75d9b3e1383\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522565-mwcp6" Feb 17 18:45:00 crc kubenswrapper[4680]: I0217 18:45:00.524790 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522565-mwcp6" Feb 17 18:45:00 crc kubenswrapper[4680]: I0217 18:45:00.979488 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522565-mwcp6"] Feb 17 18:45:01 crc kubenswrapper[4680]: I0217 18:45:01.981755 4680 generic.go:334] "Generic (PLEG): container finished" podID="a4522453-a73a-4176-b662-e75d9b3e1383" containerID="e379e141a0ae3222b715de5142140779c9240458d19199f9832560999f6abbf8" exitCode=0 Feb 17 18:45:01 crc kubenswrapper[4680]: I0217 18:45:01.981829 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522565-mwcp6" event={"ID":"a4522453-a73a-4176-b662-e75d9b3e1383","Type":"ContainerDied","Data":"e379e141a0ae3222b715de5142140779c9240458d19199f9832560999f6abbf8"} Feb 17 18:45:01 crc kubenswrapper[4680]: I0217 18:45:01.982102 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522565-mwcp6" event={"ID":"a4522453-a73a-4176-b662-e75d9b3e1383","Type":"ContainerStarted","Data":"e77d83c17c3d97e32824bed7f3d1e68fa60f65c020c80e2e39aef540208dd5b5"} Feb 17 18:45:03 crc kubenswrapper[4680]: I0217 18:45:03.455935 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522565-mwcp6" Feb 17 18:45:03 crc kubenswrapper[4680]: I0217 18:45:03.532454 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4522453-a73a-4176-b662-e75d9b3e1383-secret-volume\") pod \"a4522453-a73a-4176-b662-e75d9b3e1383\" (UID: \"a4522453-a73a-4176-b662-e75d9b3e1383\") " Feb 17 18:45:03 crc kubenswrapper[4680]: I0217 18:45:03.532594 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg7z6\" (UniqueName: \"kubernetes.io/projected/a4522453-a73a-4176-b662-e75d9b3e1383-kube-api-access-mg7z6\") pod \"a4522453-a73a-4176-b662-e75d9b3e1383\" (UID: \"a4522453-a73a-4176-b662-e75d9b3e1383\") " Feb 17 18:45:03 crc kubenswrapper[4680]: I0217 18:45:03.532679 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4522453-a73a-4176-b662-e75d9b3e1383-config-volume\") pod \"a4522453-a73a-4176-b662-e75d9b3e1383\" (UID: \"a4522453-a73a-4176-b662-e75d9b3e1383\") " Feb 17 18:45:03 crc kubenswrapper[4680]: I0217 18:45:03.533245 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4522453-a73a-4176-b662-e75d9b3e1383-config-volume" (OuterVolumeSpecName: "config-volume") pod "a4522453-a73a-4176-b662-e75d9b3e1383" (UID: "a4522453-a73a-4176-b662-e75d9b3e1383"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:45:03 crc kubenswrapper[4680]: I0217 18:45:03.533759 4680 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4522453-a73a-4176-b662-e75d9b3e1383-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 18:45:03 crc kubenswrapper[4680]: I0217 18:45:03.543635 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4522453-a73a-4176-b662-e75d9b3e1383-kube-api-access-mg7z6" (OuterVolumeSpecName: "kube-api-access-mg7z6") pod "a4522453-a73a-4176-b662-e75d9b3e1383" (UID: "a4522453-a73a-4176-b662-e75d9b3e1383"). InnerVolumeSpecName "kube-api-access-mg7z6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:45:03 crc kubenswrapper[4680]: I0217 18:45:03.551500 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4522453-a73a-4176-b662-e75d9b3e1383-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a4522453-a73a-4176-b662-e75d9b3e1383" (UID: "a4522453-a73a-4176-b662-e75d9b3e1383"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:45:03 crc kubenswrapper[4680]: I0217 18:45:03.635574 4680 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4522453-a73a-4176-b662-e75d9b3e1383-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 18:45:03 crc kubenswrapper[4680]: I0217 18:45:03.635620 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg7z6\" (UniqueName: \"kubernetes.io/projected/a4522453-a73a-4176-b662-e75d9b3e1383-kube-api-access-mg7z6\") on node \"crc\" DevicePath \"\"" Feb 17 18:45:04 crc kubenswrapper[4680]: I0217 18:45:04.000188 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522565-mwcp6" event={"ID":"a4522453-a73a-4176-b662-e75d9b3e1383","Type":"ContainerDied","Data":"e77d83c17c3d97e32824bed7f3d1e68fa60f65c020c80e2e39aef540208dd5b5"} Feb 17 18:45:04 crc kubenswrapper[4680]: I0217 18:45:04.000476 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e77d83c17c3d97e32824bed7f3d1e68fa60f65c020c80e2e39aef540208dd5b5" Feb 17 18:45:04 crc kubenswrapper[4680]: I0217 18:45:04.000584 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522565-mwcp6" Feb 17 18:45:04 crc kubenswrapper[4680]: I0217 18:45:04.559987 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522520-z6ls8"] Feb 17 18:45:04 crc kubenswrapper[4680]: I0217 18:45:04.571489 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522520-z6ls8"] Feb 17 18:45:05 crc kubenswrapper[4680]: I0217 18:45:05.117590 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5" path="/var/lib/kubelet/pods/a30f4c08-ceb0-421a-b15d-9ab1dfcb21a5/volumes" Feb 17 18:45:10 crc kubenswrapper[4680]: I0217 18:45:10.106329 4680 scope.go:117] "RemoveContainer" containerID="10d08de0ee827a1b4b9d81d0e6ba29cabc2146b62e41f3a140223a4231a52560" Feb 17 18:45:10 crc kubenswrapper[4680]: E0217 18:45:10.106989 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:45:22 crc kubenswrapper[4680]: I0217 18:45:22.105108 4680 scope.go:117] "RemoveContainer" containerID="10d08de0ee827a1b4b9d81d0e6ba29cabc2146b62e41f3a140223a4231a52560" Feb 17 18:45:22 crc kubenswrapper[4680]: E0217 18:45:22.105909 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:45:22 crc kubenswrapper[4680]: I0217 18:45:22.270518 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5c85cf9547-qhbrg" 
podUID="fc8411d6-1f53-4a7f-8528-0b3719bc5a6f" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 17 18:45:35 crc kubenswrapper[4680]: I0217 18:45:35.105682 4680 scope.go:117] "RemoveContainer" containerID="10d08de0ee827a1b4b9d81d0e6ba29cabc2146b62e41f3a140223a4231a52560" Feb 17 18:45:35 crc kubenswrapper[4680]: E0217 18:45:35.106502 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:45:47 crc kubenswrapper[4680]: I0217 18:45:47.105994 4680 scope.go:117] "RemoveContainer" containerID="10d08de0ee827a1b4b9d81d0e6ba29cabc2146b62e41f3a140223a4231a52560" Feb 17 18:45:47 crc kubenswrapper[4680]: E0217 18:45:47.106733 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:45:55 crc kubenswrapper[4680]: I0217 18:45:55.901237 4680 scope.go:117] "RemoveContainer" containerID="0ddbcda602c4a8d221f44b10a2413c97bbbe06e2ac84c5b269e3f31566713f4a" Feb 17 18:46:01 crc kubenswrapper[4680]: I0217 18:46:01.388942 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6zhf9"] Feb 17 18:46:01 crc kubenswrapper[4680]: E0217 18:46:01.389947 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4522453-a73a-4176-b662-e75d9b3e1383" containerName="collect-profiles" Feb 17 18:46:01 crc kubenswrapper[4680]: I0217 18:46:01.389964 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4522453-a73a-4176-b662-e75d9b3e1383" containerName="collect-profiles" Feb 17 18:46:01 crc kubenswrapper[4680]: I0217 18:46:01.390199 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4522453-a73a-4176-b662-e75d9b3e1383" containerName="collect-profiles" Feb 17 18:46:01 crc kubenswrapper[4680]: I0217 18:46:01.391742 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6zhf9" Feb 17 18:46:01 crc kubenswrapper[4680]: I0217 18:46:01.401734 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6zhf9"] Feb 17 18:46:01 crc kubenswrapper[4680]: I0217 18:46:01.548027 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f369607-6ecb-4248-8f43-a5875e4bc1b9-utilities\") pod \"certified-operators-6zhf9\" (UID: \"3f369607-6ecb-4248-8f43-a5875e4bc1b9\") " pod="openshift-marketplace/certified-operators-6zhf9" Feb 17 18:46:01 crc kubenswrapper[4680]: I0217 18:46:01.548146 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f369607-6ecb-4248-8f43-a5875e4bc1b9-catalog-content\") pod \"certified-operators-6zhf9\" (UID: \"3f369607-6ecb-4248-8f43-a5875e4bc1b9\") " pod="openshift-marketplace/certified-operators-6zhf9" Feb 17 18:46:01 crc kubenswrapper[4680]: I0217 18:46:01.548179 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l5bm\" (UniqueName: \"kubernetes.io/projected/3f369607-6ecb-4248-8f43-a5875e4bc1b9-kube-api-access-6l5bm\") pod \"certified-operators-6zhf9\" (UID: \"3f369607-6ecb-4248-8f43-a5875e4bc1b9\") " pod="openshift-marketplace/certified-operators-6zhf9" Feb 17 18:46:01 crc kubenswrapper[4680]: I0217 18:46:01.649752 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f369607-6ecb-4248-8f43-a5875e4bc1b9-utilities\") pod \"certified-operators-6zhf9\" (UID: \"3f369607-6ecb-4248-8f43-a5875e4bc1b9\") " pod="openshift-marketplace/certified-operators-6zhf9" Feb 17 18:46:01 crc kubenswrapper[4680]: I0217 18:46:01.649869 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f369607-6ecb-4248-8f43-a5875e4bc1b9-catalog-content\") pod \"certified-operators-6zhf9\" (UID: \"3f369607-6ecb-4248-8f43-a5875e4bc1b9\") " pod="openshift-marketplace/certified-operators-6zhf9" Feb 17 18:46:01 crc kubenswrapper[4680]: I0217 18:46:01.649900 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l5bm\" (UniqueName: \"kubernetes.io/projected/3f369607-6ecb-4248-8f43-a5875e4bc1b9-kube-api-access-6l5bm\") pod \"certified-operators-6zhf9\" (UID: \"3f369607-6ecb-4248-8f43-a5875e4bc1b9\") " pod="openshift-marketplace/certified-operators-6zhf9" Feb 17 18:46:01 crc kubenswrapper[4680]: I0217 18:46:01.650411 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f369607-6ecb-4248-8f43-a5875e4bc1b9-catalog-content\") pod \"certified-operators-6zhf9\" (UID: \"3f369607-6ecb-4248-8f43-a5875e4bc1b9\") " pod="openshift-marketplace/certified-operators-6zhf9" Feb 17 18:46:01 crc kubenswrapper[4680]: I0217 18:46:01.650646 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f369607-6ecb-4248-8f43-a5875e4bc1b9-utilities\") pod \"certified-operators-6zhf9\" (UID: \"3f369607-6ecb-4248-8f43-a5875e4bc1b9\") " pod="openshift-marketplace/certified-operators-6zhf9" Feb 17 18:46:01 crc kubenswrapper[4680]: I0217 18:46:01.674531 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6l5bm\" (UniqueName: \"kubernetes.io/projected/3f369607-6ecb-4248-8f43-a5875e4bc1b9-kube-api-access-6l5bm\") pod \"certified-operators-6zhf9\" (UID: \"3f369607-6ecb-4248-8f43-a5875e4bc1b9\") " pod="openshift-marketplace/certified-operators-6zhf9" Feb 17 18:46:01 crc kubenswrapper[4680]: I0217 18:46:01.720931 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6zhf9" Feb 17 18:46:02 crc kubenswrapper[4680]: I0217 18:46:02.106998 4680 scope.go:117] "RemoveContainer" containerID="10d08de0ee827a1b4b9d81d0e6ba29cabc2146b62e41f3a140223a4231a52560" Feb 17 18:46:02 crc kubenswrapper[4680]: E0217 18:46:02.107419 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:46:02 crc kubenswrapper[4680]: I0217 18:46:02.348470 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6zhf9"] Feb 17 18:46:02 crc kubenswrapper[4680]: I0217 18:46:02.525627 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6zhf9" event={"ID":"3f369607-6ecb-4248-8f43-a5875e4bc1b9","Type":"ContainerStarted","Data":"a7d54e3defb470d44f1d7701e8461a3b8042c4e97b8a12e3ea021a2e658bb0fd"} Feb 17 18:46:03 crc kubenswrapper[4680]: I0217 18:46:03.536433 4680 generic.go:334] "Generic (PLEG): container finished" podID="3f369607-6ecb-4248-8f43-a5875e4bc1b9" containerID="7f39bfb294dd0c94bcdd9ce52ef5c49332ef95d90ecd03442e7416839089069d" exitCode=0 Feb 17 18:46:03 crc kubenswrapper[4680]: I0217 18:46:03.536469 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6zhf9" event={"ID":"3f369607-6ecb-4248-8f43-a5875e4bc1b9","Type":"ContainerDied","Data":"7f39bfb294dd0c94bcdd9ce52ef5c49332ef95d90ecd03442e7416839089069d"} Feb 17 18:46:03 crc kubenswrapper[4680]: I0217 18:46:03.538818 4680 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 18:46:04 crc kubenswrapper[4680]: I0217 18:46:04.560823 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6zhf9" event={"ID":"3f369607-6ecb-4248-8f43-a5875e4bc1b9","Type":"ContainerStarted","Data":"8f65c9df68c9fbfeca4dc60f0f23257b5aac950af3e33cc7fd0069dcd166d556"} Feb 17 18:46:06 crc kubenswrapper[4680]: I0217 18:46:06.579877 4680 generic.go:334] "Generic (PLEG): container finished" podID="3f369607-6ecb-4248-8f43-a5875e4bc1b9" containerID="8f65c9df68c9fbfeca4dc60f0f23257b5aac950af3e33cc7fd0069dcd166d556" exitCode=0 Feb 17 18:46:06 crc kubenswrapper[4680]: I0217 18:46:06.579954 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6zhf9" event={"ID":"3f369607-6ecb-4248-8f43-a5875e4bc1b9","Type":"ContainerDied","Data":"8f65c9df68c9fbfeca4dc60f0f23257b5aac950af3e33cc7fd0069dcd166d556"} Feb 17 18:46:07 crc kubenswrapper[4680]: I0217 18:46:07.590703 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6zhf9" 
event={"ID":"3f369607-6ecb-4248-8f43-a5875e4bc1b9","Type":"ContainerStarted","Data":"6eadfd0fc5f6c306988364ba21a479cc3ebe415f283c7187464afed0e54aa462"} Feb 17 18:46:07 crc kubenswrapper[4680]: I0217 18:46:07.618239 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6zhf9" podStartSLOduration=3.165480542 podStartE2EDuration="6.61821338s" podCreationTimestamp="2026-02-17 18:46:01 +0000 UTC" firstStartedPulling="2026-02-17 18:46:03.538419734 +0000 UTC m=+3693.089318413" lastFinishedPulling="2026-02-17 18:46:06.991152572 +0000 UTC m=+3696.542051251" observedRunningTime="2026-02-17 18:46:07.609104675 +0000 UTC m=+3697.160003354" watchObservedRunningTime="2026-02-17 18:46:07.61821338 +0000 UTC m=+3697.169112069" Feb 17 18:46:11 crc kubenswrapper[4680]: I0217 18:46:11.722489 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6zhf9" Feb 17 18:46:11 crc kubenswrapper[4680]: I0217 18:46:11.722983 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6zhf9" Feb 17 18:46:11 crc kubenswrapper[4680]: I0217 18:46:11.769567 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6zhf9" Feb 17 18:46:12 crc kubenswrapper[4680]: I0217 18:46:12.710382 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6zhf9" Feb 17 18:46:12 crc kubenswrapper[4680]: I0217 18:46:12.983145 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6zhf9"] Feb 17 18:46:14 crc kubenswrapper[4680]: I0217 18:46:14.650931 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6zhf9" podUID="3f369607-6ecb-4248-8f43-a5875e4bc1b9" containerName="registry-server" containerID="cri-o://6eadfd0fc5f6c306988364ba21a479cc3ebe415f283c7187464afed0e54aa462" gracePeriod=2 Feb 17 18:46:15 crc kubenswrapper[4680]: I0217 18:46:15.661684 4680 generic.go:334] "Generic (PLEG): container finished" podID="3f369607-6ecb-4248-8f43-a5875e4bc1b9" containerID="6eadfd0fc5f6c306988364ba21a479cc3ebe415f283c7187464afed0e54aa462" exitCode=0 Feb 17 18:46:15 crc kubenswrapper[4680]: I0217 18:46:15.661786 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6zhf9" event={"ID":"3f369607-6ecb-4248-8f43-a5875e4bc1b9","Type":"ContainerDied","Data":"6eadfd0fc5f6c306988364ba21a479cc3ebe415f283c7187464afed0e54aa462"} Feb 17 18:46:15 crc kubenswrapper[4680]: I0217 18:46:15.662544 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6zhf9" event={"ID":"3f369607-6ecb-4248-8f43-a5875e4bc1b9","Type":"ContainerDied","Data":"a7d54e3defb470d44f1d7701e8461a3b8042c4e97b8a12e3ea021a2e658bb0fd"} Feb 17 18:46:15 crc kubenswrapper[4680]: I0217 18:46:15.662569 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7d54e3defb470d44f1d7701e8461a3b8042c4e97b8a12e3ea021a2e658bb0fd" Feb 17 18:46:15 crc kubenswrapper[4680]: I0217 18:46:15.692929 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6zhf9" Feb 17 18:46:15 crc kubenswrapper[4680]: I0217 18:46:15.833288 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f369607-6ecb-4248-8f43-a5875e4bc1b9-utilities\") pod \"3f369607-6ecb-4248-8f43-a5875e4bc1b9\" (UID: \"3f369607-6ecb-4248-8f43-a5875e4bc1b9\") " Feb 17 18:46:15 crc kubenswrapper[4680]: I0217 18:46:15.833407 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f369607-6ecb-4248-8f43-a5875e4bc1b9-catalog-content\") pod \"3f369607-6ecb-4248-8f43-a5875e4bc1b9\" (UID: \"3f369607-6ecb-4248-8f43-a5875e4bc1b9\") " Feb 17 18:46:15 crc kubenswrapper[4680]: I0217 18:46:15.833501 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6l5bm\" (UniqueName: \"kubernetes.io/projected/3f369607-6ecb-4248-8f43-a5875e4bc1b9-kube-api-access-6l5bm\") pod \"3f369607-6ecb-4248-8f43-a5875e4bc1b9\" (UID: \"3f369607-6ecb-4248-8f43-a5875e4bc1b9\") " Feb 17 18:46:15 crc kubenswrapper[4680]: I0217 18:46:15.834669 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f369607-6ecb-4248-8f43-a5875e4bc1b9-utilities" (OuterVolumeSpecName: "utilities") pod "3f369607-6ecb-4248-8f43-a5875e4bc1b9" (UID: "3f369607-6ecb-4248-8f43-a5875e4bc1b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:46:15 crc kubenswrapper[4680]: I0217 18:46:15.843118 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f369607-6ecb-4248-8f43-a5875e4bc1b9-kube-api-access-6l5bm" (OuterVolumeSpecName: "kube-api-access-6l5bm") pod "3f369607-6ecb-4248-8f43-a5875e4bc1b9" (UID: "3f369607-6ecb-4248-8f43-a5875e4bc1b9"). InnerVolumeSpecName "kube-api-access-6l5bm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:46:15 crc kubenswrapper[4680]: I0217 18:46:15.888034 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f369607-6ecb-4248-8f43-a5875e4bc1b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f369607-6ecb-4248-8f43-a5875e4bc1b9" (UID: "3f369607-6ecb-4248-8f43-a5875e4bc1b9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:46:15 crc kubenswrapper[4680]: I0217 18:46:15.938000 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f369607-6ecb-4248-8f43-a5875e4bc1b9-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 18:46:15 crc kubenswrapper[4680]: I0217 18:46:15.938041 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f369607-6ecb-4248-8f43-a5875e4bc1b9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 18:46:15 crc kubenswrapper[4680]: I0217 18:46:15.938054 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6l5bm\" (UniqueName: \"kubernetes.io/projected/3f369607-6ecb-4248-8f43-a5875e4bc1b9-kube-api-access-6l5bm\") on node \"crc\" DevicePath \"\"" Feb 17 18:46:16 crc kubenswrapper[4680]: I0217 18:46:16.107629 4680 scope.go:117] "RemoveContainer" containerID="10d08de0ee827a1b4b9d81d0e6ba29cabc2146b62e41f3a140223a4231a52560" Feb 17 18:46:16 crc kubenswrapper[4680]: E0217 18:46:16.107958 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:46:16 crc kubenswrapper[4680]: I0217 18:46:16.670828 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6zhf9" Feb 17 18:46:16 crc kubenswrapper[4680]: I0217 18:46:16.726554 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6zhf9"] Feb 17 18:46:16 crc kubenswrapper[4680]: I0217 18:46:16.738387 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6zhf9"] Feb 17 18:46:17 crc kubenswrapper[4680]: I0217 18:46:17.116589 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f369607-6ecb-4248-8f43-a5875e4bc1b9" path="/var/lib/kubelet/pods/3f369607-6ecb-4248-8f43-a5875e4bc1b9/volumes" Feb 17 18:46:30 crc kubenswrapper[4680]: I0217 18:46:30.105617 4680 scope.go:117] "RemoveContainer" containerID="10d08de0ee827a1b4b9d81d0e6ba29cabc2146b62e41f3a140223a4231a52560" Feb 17 18:46:30 crc kubenswrapper[4680]: E0217 18:46:30.106374 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:46:43 crc kubenswrapper[4680]: I0217 18:46:43.106128 4680 scope.go:117] "RemoveContainer" containerID="10d08de0ee827a1b4b9d81d0e6ba29cabc2146b62e41f3a140223a4231a52560" Feb 17 18:46:43 crc kubenswrapper[4680]: E0217 18:46:43.107648 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:46:54 crc kubenswrapper[4680]: I0217 18:46:54.105192 4680 scope.go:117] "RemoveContainer" containerID="10d08de0ee827a1b4b9d81d0e6ba29cabc2146b62e41f3a140223a4231a52560" Feb 17 18:46:54 crc kubenswrapper[4680]: E0217 18:46:54.105915 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:47:08 crc kubenswrapper[4680]: I0217 18:47:08.105470 4680 scope.go:117] "RemoveContainer" containerID="10d08de0ee827a1b4b9d81d0e6ba29cabc2146b62e41f3a140223a4231a52560" Feb 17 18:47:08 crc kubenswrapper[4680]: E0217 18:47:08.107831 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:47:22 crc kubenswrapper[4680]: I0217 18:47:22.106063 4680 scope.go:117] "RemoveContainer" containerID="10d08de0ee827a1b4b9d81d0e6ba29cabc2146b62e41f3a140223a4231a52560" Feb 17 18:47:22 crc kubenswrapper[4680]: E0217 18:47:22.107110 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:47:35 crc kubenswrapper[4680]: I0217 18:47:35.107724 4680 scope.go:117] "RemoveContainer" containerID="10d08de0ee827a1b4b9d81d0e6ba29cabc2146b62e41f3a140223a4231a52560" Feb 17 18:47:35 crc kubenswrapper[4680]: E0217 18:47:35.108594 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:47:47 crc kubenswrapper[4680]: I0217 18:47:47.105743 4680 scope.go:117] "RemoveContainer" containerID="10d08de0ee827a1b4b9d81d0e6ba29cabc2146b62e41f3a140223a4231a52560" Feb 17 18:47:47 crc kubenswrapper[4680]: E0217 18:47:47.106514 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" 
podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:47:56 crc kubenswrapper[4680]: I0217 18:47:56.013270 4680 scope.go:117] "RemoveContainer" containerID="75c0ac3def3064a7b8602251535cef55af725dec8da048a2985ef274bc435b0a" Feb 17 18:47:56 crc kubenswrapper[4680]: I0217 18:47:56.035967 4680 scope.go:117] "RemoveContainer" containerID="ce4781358cb6852acc901ad2b4906810cdec84de674fb51fb525e0dd37cc456e" Feb 17 18:47:56 crc kubenswrapper[4680]: I0217 18:47:56.095061 4680 scope.go:117] "RemoveContainer" containerID="c0a70f68f77b64cfdc8f61a95eeb3bd81d1ee93a3e7a5604c2f112a21503a84c" Feb 17 18:48:02 crc kubenswrapper[4680]: I0217 18:48:02.105077 4680 scope.go:117] "RemoveContainer" containerID="10d08de0ee827a1b4b9d81d0e6ba29cabc2146b62e41f3a140223a4231a52560" Feb 17 18:48:02 crc kubenswrapper[4680]: E0217 18:48:02.105961 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:48:16 crc kubenswrapper[4680]: I0217 18:48:16.105700 4680 scope.go:117] "RemoveContainer" containerID="10d08de0ee827a1b4b9d81d0e6ba29cabc2146b62e41f3a140223a4231a52560" Feb 17 18:48:16 crc kubenswrapper[4680]: E0217 18:48:16.106519 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:48:31 crc kubenswrapper[4680]: I0217 18:48:31.113386 4680 scope.go:117] "RemoveContainer" containerID="10d08de0ee827a1b4b9d81d0e6ba29cabc2146b62e41f3a140223a4231a52560" Feb 17 18:48:31 crc kubenswrapper[4680]: E0217 18:48:31.114831 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:48:43 crc kubenswrapper[4680]: I0217 18:48:43.105578 4680 scope.go:117] "RemoveContainer" containerID="10d08de0ee827a1b4b9d81d0e6ba29cabc2146b62e41f3a140223a4231a52560" Feb 17 18:48:43 crc kubenswrapper[4680]: E0217 18:48:43.106615 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:48:55 crc kubenswrapper[4680]: I0217 18:48:55.105946 4680 scope.go:117] "RemoveContainer" containerID="10d08de0ee827a1b4b9d81d0e6ba29cabc2146b62e41f3a140223a4231a52560" Feb 17 18:48:55 crc kubenswrapper[4680]: E0217 18:48:55.106637 4680 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:49:10 crc kubenswrapper[4680]: I0217 18:49:10.105013 4680 scope.go:117] "RemoveContainer" containerID="10d08de0ee827a1b4b9d81d0e6ba29cabc2146b62e41f3a140223a4231a52560" Feb 17 18:49:10 crc kubenswrapper[4680]: I0217 18:49:10.494295 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerStarted","Data":"c2588e31b0a321d94a85ae0410d3d021deaa574d637ba38f421c735456e240e7"} Feb 17 18:51:02 crc kubenswrapper[4680]: I0217 18:51:02.779645 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d4db4"] Feb 17 18:51:02 crc kubenswrapper[4680]: E0217 18:51:02.780964 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f369607-6ecb-4248-8f43-a5875e4bc1b9" containerName="extract-utilities" Feb 17 18:51:02 crc kubenswrapper[4680]: I0217 18:51:02.780983 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f369607-6ecb-4248-8f43-a5875e4bc1b9" containerName="extract-utilities" Feb 17 18:51:02 crc kubenswrapper[4680]: E0217 18:51:02.781030 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f369607-6ecb-4248-8f43-a5875e4bc1b9" containerName="extract-content" Feb 17 18:51:02 crc kubenswrapper[4680]: I0217 18:51:02.781039 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f369607-6ecb-4248-8f43-a5875e4bc1b9" containerName="extract-content" Feb 17 18:51:02 crc kubenswrapper[4680]: E0217 18:51:02.781052 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f369607-6ecb-4248-8f43-a5875e4bc1b9" containerName="registry-server" Feb 17 18:51:02 crc kubenswrapper[4680]: I0217 18:51:02.781059 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f369607-6ecb-4248-8f43-a5875e4bc1b9" containerName="registry-server" Feb 17 18:51:02 crc kubenswrapper[4680]: I0217 18:51:02.781281 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f369607-6ecb-4248-8f43-a5875e4bc1b9" containerName="registry-server" Feb 17 18:51:02 crc kubenswrapper[4680]: I0217 18:51:02.782892 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d4db4" Feb 17 18:51:02 crc kubenswrapper[4680]: I0217 18:51:02.796614 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d4db4"] Feb 17 18:51:02 crc kubenswrapper[4680]: I0217 18:51:02.942680 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1db8fb3-ba63-4be7-8046-3bb8d15728e5-catalog-content\") pod \"community-operators-d4db4\" (UID: \"d1db8fb3-ba63-4be7-8046-3bb8d15728e5\") " pod="openshift-marketplace/community-operators-d4db4" Feb 17 18:51:02 crc kubenswrapper[4680]: I0217 18:51:02.942963 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1db8fb3-ba63-4be7-8046-3bb8d15728e5-utilities\") pod \"community-operators-d4db4\" (UID: \"d1db8fb3-ba63-4be7-8046-3bb8d15728e5\") " pod="openshift-marketplace/community-operators-d4db4" Feb 17 18:51:02 crc kubenswrapper[4680]: I0217 18:51:02.943046 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h58jf\" (UniqueName: \"kubernetes.io/projected/d1db8fb3-ba63-4be7-8046-3bb8d15728e5-kube-api-access-h58jf\") pod \"community-operators-d4db4\" (UID: \"d1db8fb3-ba63-4be7-8046-3bb8d15728e5\") " pod="openshift-marketplace/community-operators-d4db4" Feb 17 18:51:03 crc kubenswrapper[4680]: I0217 18:51:03.045253 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h58jf\" (UniqueName: \"kubernetes.io/projected/d1db8fb3-ba63-4be7-8046-3bb8d15728e5-kube-api-access-h58jf\") pod \"community-operators-d4db4\" (UID: \"d1db8fb3-ba63-4be7-8046-3bb8d15728e5\") " pod="openshift-marketplace/community-operators-d4db4" Feb 17 18:51:03 crc kubenswrapper[4680]: I0217 18:51:03.045626 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1db8fb3-ba63-4be7-8046-3bb8d15728e5-catalog-content\") pod \"community-operators-d4db4\" (UID: \"d1db8fb3-ba63-4be7-8046-3bb8d15728e5\") " pod="openshift-marketplace/community-operators-d4db4" Feb 17 18:51:03 crc kubenswrapper[4680]: I0217 18:51:03.045698 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1db8fb3-ba63-4be7-8046-3bb8d15728e5-utilities\") pod \"community-operators-d4db4\" (UID: \"d1db8fb3-ba63-4be7-8046-3bb8d15728e5\") " pod="openshift-marketplace/community-operators-d4db4" Feb 17 18:51:03 crc kubenswrapper[4680]: I0217 18:51:03.046197 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1db8fb3-ba63-4be7-8046-3bb8d15728e5-catalog-content\") pod \"community-operators-d4db4\" (UID: \"d1db8fb3-ba63-4be7-8046-3bb8d15728e5\") " pod="openshift-marketplace/community-operators-d4db4" Feb 17 18:51:03 crc kubenswrapper[4680]: I0217 18:51:03.046202 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1db8fb3-ba63-4be7-8046-3bb8d15728e5-utilities\") pod \"community-operators-d4db4\" (UID: \"d1db8fb3-ba63-4be7-8046-3bb8d15728e5\") " pod="openshift-marketplace/community-operators-d4db4" Feb 17 18:51:03 crc kubenswrapper[4680]: I0217 18:51:03.072757 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-h58jf\" (UniqueName: \"kubernetes.io/projected/d1db8fb3-ba63-4be7-8046-3bb8d15728e5-kube-api-access-h58jf\") pod \"community-operators-d4db4\" (UID: \"d1db8fb3-ba63-4be7-8046-3bb8d15728e5\") " pod="openshift-marketplace/community-operators-d4db4" Feb 17 18:51:03 crc kubenswrapper[4680]: I0217 18:51:03.102468 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d4db4" Feb 17 18:51:03 crc kubenswrapper[4680]: I0217 18:51:03.672713 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d4db4"] Feb 17 18:51:04 crc kubenswrapper[4680]: I0217 18:51:04.103914 4680 generic.go:334] "Generic (PLEG): container finished" podID="d1db8fb3-ba63-4be7-8046-3bb8d15728e5" containerID="26007a2217d68491dcfd5599e7dd6cd246769fb43ebc2039226355f1a0e5ddad" exitCode=0 Feb 17 18:51:04 crc kubenswrapper[4680]: I0217 18:51:04.103957 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d4db4" event={"ID":"d1db8fb3-ba63-4be7-8046-3bb8d15728e5","Type":"ContainerDied","Data":"26007a2217d68491dcfd5599e7dd6cd246769fb43ebc2039226355f1a0e5ddad"} Feb 17 18:51:04 crc kubenswrapper[4680]: I0217 18:51:04.104200 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d4db4" event={"ID":"d1db8fb3-ba63-4be7-8046-3bb8d15728e5","Type":"ContainerStarted","Data":"e908db140a687d2ce546535ca08786ee85583ad410796b241c9dd5a4b453de86"} Feb 17 18:51:04 crc kubenswrapper[4680]: I0217 18:51:04.106858 4680 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 18:51:05 crc kubenswrapper[4680]: I0217 18:51:05.126289 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d4db4" event={"ID":"d1db8fb3-ba63-4be7-8046-3bb8d15728e5","Type":"ContainerStarted","Data":"867110a32723edf754d66f8d7ba2d5f79d7fa7682cd5b297ec238f14b3224c4c"} Feb 17 18:51:07 crc kubenswrapper[4680]: I0217 18:51:07.146840 4680 generic.go:334] "Generic (PLEG): container finished" podID="d1db8fb3-ba63-4be7-8046-3bb8d15728e5" containerID="867110a32723edf754d66f8d7ba2d5f79d7fa7682cd5b297ec238f14b3224c4c" exitCode=0 Feb 17 18:51:07 crc kubenswrapper[4680]: I0217 18:51:07.147455 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d4db4" event={"ID":"d1db8fb3-ba63-4be7-8046-3bb8d15728e5","Type":"ContainerDied","Data":"867110a32723edf754d66f8d7ba2d5f79d7fa7682cd5b297ec238f14b3224c4c"} Feb 17 18:51:08 crc kubenswrapper[4680]: I0217 18:51:08.156198 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d4db4" event={"ID":"d1db8fb3-ba63-4be7-8046-3bb8d15728e5","Type":"ContainerStarted","Data":"a0ba8b740dca5e3436610b50711d1e53b834474b5fe9b841b426e37a1b525a51"} Feb 17 18:51:08 crc kubenswrapper[4680]: I0217 18:51:08.190598 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d4db4" podStartSLOduration=2.770385211 podStartE2EDuration="6.19057969s" podCreationTimestamp="2026-02-17 18:51:02 +0000 UTC" firstStartedPulling="2026-02-17 18:51:04.106565452 +0000 UTC m=+3993.657464131" lastFinishedPulling="2026-02-17 18:51:07.526759941 +0000 UTC m=+3997.077658610" observedRunningTime="2026-02-17 18:51:08.18675907 +0000 UTC m=+3997.737657769" watchObservedRunningTime="2026-02-17 
18:51:08.19057969 +0000 UTC m=+3997.741478369" Feb 17 18:51:13 crc kubenswrapper[4680]: I0217 18:51:13.103451 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d4db4" Feb 17 18:51:13 crc kubenswrapper[4680]: I0217 18:51:13.103879 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d4db4" Feb 17 18:51:13 crc kubenswrapper[4680]: I0217 18:51:13.178639 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d4db4" Feb 17 18:51:13 crc kubenswrapper[4680]: I0217 18:51:13.273381 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d4db4" Feb 17 18:51:13 crc kubenswrapper[4680]: I0217 18:51:13.424825 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d4db4"] Feb 17 18:51:15 crc kubenswrapper[4680]: I0217 18:51:15.236655 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d4db4" podUID="d1db8fb3-ba63-4be7-8046-3bb8d15728e5" containerName="registry-server" containerID="cri-o://a0ba8b740dca5e3436610b50711d1e53b834474b5fe9b841b426e37a1b525a51" gracePeriod=2 Feb 17 18:51:15 crc kubenswrapper[4680]: I0217 18:51:15.733526 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d4db4" Feb 17 18:51:15 crc kubenswrapper[4680]: I0217 18:51:15.907751 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1db8fb3-ba63-4be7-8046-3bb8d15728e5-utilities\") pod \"d1db8fb3-ba63-4be7-8046-3bb8d15728e5\" (UID: \"d1db8fb3-ba63-4be7-8046-3bb8d15728e5\") " Feb 17 18:51:15 crc kubenswrapper[4680]: I0217 18:51:15.908087 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1db8fb3-ba63-4be7-8046-3bb8d15728e5-catalog-content\") pod \"d1db8fb3-ba63-4be7-8046-3bb8d15728e5\" (UID: \"d1db8fb3-ba63-4be7-8046-3bb8d15728e5\") " Feb 17 18:51:15 crc kubenswrapper[4680]: I0217 18:51:15.908161 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h58jf\" (UniqueName: \"kubernetes.io/projected/d1db8fb3-ba63-4be7-8046-3bb8d15728e5-kube-api-access-h58jf\") pod \"d1db8fb3-ba63-4be7-8046-3bb8d15728e5\" (UID: \"d1db8fb3-ba63-4be7-8046-3bb8d15728e5\") " Feb 17 18:51:15 crc kubenswrapper[4680]: I0217 18:51:15.909003 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1db8fb3-ba63-4be7-8046-3bb8d15728e5-utilities" (OuterVolumeSpecName: "utilities") pod "d1db8fb3-ba63-4be7-8046-3bb8d15728e5" (UID: "d1db8fb3-ba63-4be7-8046-3bb8d15728e5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:51:15 crc kubenswrapper[4680]: I0217 18:51:15.913296 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1db8fb3-ba63-4be7-8046-3bb8d15728e5-kube-api-access-h58jf" (OuterVolumeSpecName: "kube-api-access-h58jf") pod "d1db8fb3-ba63-4be7-8046-3bb8d15728e5" (UID: "d1db8fb3-ba63-4be7-8046-3bb8d15728e5"). InnerVolumeSpecName "kube-api-access-h58jf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:51:15 crc kubenswrapper[4680]: I0217 18:51:15.960223 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1db8fb3-ba63-4be7-8046-3bb8d15728e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1db8fb3-ba63-4be7-8046-3bb8d15728e5" (UID: "d1db8fb3-ba63-4be7-8046-3bb8d15728e5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:51:16 crc kubenswrapper[4680]: I0217 18:51:16.010018 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1db8fb3-ba63-4be7-8046-3bb8d15728e5-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 18:51:16 crc kubenswrapper[4680]: I0217 18:51:16.010051 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1db8fb3-ba63-4be7-8046-3bb8d15728e5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 18:51:16 crc kubenswrapper[4680]: I0217 18:51:16.010062 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h58jf\" (UniqueName: \"kubernetes.io/projected/d1db8fb3-ba63-4be7-8046-3bb8d15728e5-kube-api-access-h58jf\") on node \"crc\" DevicePath \"\"" Feb 17 18:51:16 crc kubenswrapper[4680]: I0217 18:51:16.248897 4680 generic.go:334] "Generic (PLEG): container finished" podID="d1db8fb3-ba63-4be7-8046-3bb8d15728e5" containerID="a0ba8b740dca5e3436610b50711d1e53b834474b5fe9b841b426e37a1b525a51" exitCode=0 Feb 17 18:51:16 crc kubenswrapper[4680]: I0217 18:51:16.248974 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d4db4" Feb 17 18:51:16 crc kubenswrapper[4680]: I0217 18:51:16.248971 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d4db4" event={"ID":"d1db8fb3-ba63-4be7-8046-3bb8d15728e5","Type":"ContainerDied","Data":"a0ba8b740dca5e3436610b50711d1e53b834474b5fe9b841b426e37a1b525a51"} Feb 17 18:51:16 crc kubenswrapper[4680]: I0217 18:51:16.249030 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d4db4" event={"ID":"d1db8fb3-ba63-4be7-8046-3bb8d15728e5","Type":"ContainerDied","Data":"e908db140a687d2ce546535ca08786ee85583ad410796b241c9dd5a4b453de86"} Feb 17 18:51:16 crc kubenswrapper[4680]: I0217 18:51:16.249052 4680 scope.go:117] "RemoveContainer" containerID="a0ba8b740dca5e3436610b50711d1e53b834474b5fe9b841b426e37a1b525a51" Feb 17 18:51:16 crc kubenswrapper[4680]: I0217 18:51:16.275509 4680 scope.go:117] "RemoveContainer" containerID="867110a32723edf754d66f8d7ba2d5f79d7fa7682cd5b297ec238f14b3224c4c" Feb 17 18:51:16 crc kubenswrapper[4680]: I0217 18:51:16.293054 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d4db4"] Feb 17 18:51:16 crc kubenswrapper[4680]: I0217 18:51:16.303422 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d4db4"] Feb 17 18:51:16 crc kubenswrapper[4680]: I0217 18:51:16.303583 4680 scope.go:117] "RemoveContainer" containerID="26007a2217d68491dcfd5599e7dd6cd246769fb43ebc2039226355f1a0e5ddad" Feb 17 18:51:16 crc kubenswrapper[4680]: I0217 18:51:16.343669 4680 scope.go:117] "RemoveContainer" containerID="a0ba8b740dca5e3436610b50711d1e53b834474b5fe9b841b426e37a1b525a51" Feb 17 18:51:16 crc kubenswrapper[4680]: E0217 18:51:16.344634 4680 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0ba8b740dca5e3436610b50711d1e53b834474b5fe9b841b426e37a1b525a51\": container with ID starting with a0ba8b740dca5e3436610b50711d1e53b834474b5fe9b841b426e37a1b525a51 not found: ID does not exist" containerID="a0ba8b740dca5e3436610b50711d1e53b834474b5fe9b841b426e37a1b525a51" Feb 17 18:51:16 crc kubenswrapper[4680]: I0217 18:51:16.344668 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0ba8b740dca5e3436610b50711d1e53b834474b5fe9b841b426e37a1b525a51"} err="failed to get container status \"a0ba8b740dca5e3436610b50711d1e53b834474b5fe9b841b426e37a1b525a51\": rpc error: code = NotFound desc = could not find container \"a0ba8b740dca5e3436610b50711d1e53b834474b5fe9b841b426e37a1b525a51\": container with ID starting with a0ba8b740dca5e3436610b50711d1e53b834474b5fe9b841b426e37a1b525a51 not found: ID does not exist" Feb 17 18:51:16 crc kubenswrapper[4680]: I0217 18:51:16.344690 4680 scope.go:117] "RemoveContainer" containerID="867110a32723edf754d66f8d7ba2d5f79d7fa7682cd5b297ec238f14b3224c4c" Feb 17 18:51:16 crc kubenswrapper[4680]: E0217 18:51:16.345151 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"867110a32723edf754d66f8d7ba2d5f79d7fa7682cd5b297ec238f14b3224c4c\": container with ID starting with 867110a32723edf754d66f8d7ba2d5f79d7fa7682cd5b297ec238f14b3224c4c not found: ID does not exist" containerID="867110a32723edf754d66f8d7ba2d5f79d7fa7682cd5b297ec238f14b3224c4c" Feb 17 18:51:16 crc kubenswrapper[4680]: I0217 18:51:16.345181 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"867110a32723edf754d66f8d7ba2d5f79d7fa7682cd5b297ec238f14b3224c4c"} err="failed to get container status \"867110a32723edf754d66f8d7ba2d5f79d7fa7682cd5b297ec238f14b3224c4c\": rpc error: code = NotFound desc = could not find container \"867110a32723edf754d66f8d7ba2d5f79d7fa7682cd5b297ec238f14b3224c4c\": container with ID starting with 867110a32723edf754d66f8d7ba2d5f79d7fa7682cd5b297ec238f14b3224c4c not found: ID does not exist" Feb 17 18:51:16 crc kubenswrapper[4680]: I0217 18:51:16.345198 4680 scope.go:117] "RemoveContainer" containerID="26007a2217d68491dcfd5599e7dd6cd246769fb43ebc2039226355f1a0e5ddad" Feb 17 18:51:16 crc kubenswrapper[4680]: E0217 18:51:16.345674 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26007a2217d68491dcfd5599e7dd6cd246769fb43ebc2039226355f1a0e5ddad\": container with ID starting with 26007a2217d68491dcfd5599e7dd6cd246769fb43ebc2039226355f1a0e5ddad not found: ID does not exist" containerID="26007a2217d68491dcfd5599e7dd6cd246769fb43ebc2039226355f1a0e5ddad" Feb 17 18:51:16 crc kubenswrapper[4680]: I0217 18:51:16.345719 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26007a2217d68491dcfd5599e7dd6cd246769fb43ebc2039226355f1a0e5ddad"} err="failed to get container status \"26007a2217d68491dcfd5599e7dd6cd246769fb43ebc2039226355f1a0e5ddad\": rpc error: code = NotFound desc = could not find container \"26007a2217d68491dcfd5599e7dd6cd246769fb43ebc2039226355f1a0e5ddad\": container with ID starting with 26007a2217d68491dcfd5599e7dd6cd246769fb43ebc2039226355f1a0e5ddad not found: ID does not exist" Feb 17 18:51:17 crc kubenswrapper[4680]: I0217 18:51:17.128683 4680 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="d1db8fb3-ba63-4be7-8046-3bb8d15728e5" path="/var/lib/kubelet/pods/d1db8fb3-ba63-4be7-8046-3bb8d15728e5/volumes" Feb 17 18:51:35 crc kubenswrapper[4680]: I0217 18:51:35.588836 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:51:35 crc kubenswrapper[4680]: I0217 18:51:35.589488 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:52:05 crc kubenswrapper[4680]: I0217 18:52:05.588979 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:52:05 crc kubenswrapper[4680]: I0217 18:52:05.589502 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:52:35 crc kubenswrapper[4680]: I0217 18:52:35.588817 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:52:35 crc kubenswrapper[4680]: I0217 18:52:35.589705 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:52:35 crc kubenswrapper[4680]: I0217 18:52:35.589764 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" Feb 17 18:52:35 crc kubenswrapper[4680]: I0217 18:52:35.590841 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c2588e31b0a321d94a85ae0410d3d021deaa574d637ba38f421c735456e240e7"} pod="openshift-machine-config-operator/machine-config-daemon-rfw56" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 18:52:35 crc kubenswrapper[4680]: I0217 18:52:35.590919 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" containerID="cri-o://c2588e31b0a321d94a85ae0410d3d021deaa574d637ba38f421c735456e240e7" gracePeriod=600 Feb 17 18:52:35 crc kubenswrapper[4680]: I0217 18:52:35.997758 4680 generic.go:334] "Generic (PLEG): container finished" 
podID="8dd08058-016e-4607-8be6-40463674c2fe" containerID="c2588e31b0a321d94a85ae0410d3d021deaa574d637ba38f421c735456e240e7" exitCode=0 Feb 17 18:52:35 crc kubenswrapper[4680]: I0217 18:52:35.997834 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerDied","Data":"c2588e31b0a321d94a85ae0410d3d021deaa574d637ba38f421c735456e240e7"} Feb 17 18:52:35 crc kubenswrapper[4680]: I0217 18:52:35.998129 4680 scope.go:117] "RemoveContainer" containerID="10d08de0ee827a1b4b9d81d0e6ba29cabc2146b62e41f3a140223a4231a52560" Feb 17 18:52:37 crc kubenswrapper[4680]: I0217 18:52:37.007207 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerStarted","Data":"5878893ca78c00fe85ae7205f751e33002b0f532b3587ba2302097298986a151"} Feb 17 18:52:56 crc kubenswrapper[4680]: I0217 18:52:56.243626 4680 scope.go:117] "RemoveContainer" containerID="7f39bfb294dd0c94bcdd9ce52ef5c49332ef95d90ecd03442e7416839089069d" Feb 17 18:52:56 crc kubenswrapper[4680]: I0217 18:52:56.282306 4680 scope.go:117] "RemoveContainer" containerID="6eadfd0fc5f6c306988364ba21a479cc3ebe415f283c7187464afed0e54aa462" Feb 17 18:52:56 crc kubenswrapper[4680]: I0217 18:52:56.314071 4680 scope.go:117] "RemoveContainer" containerID="8f65c9df68c9fbfeca4dc60f0f23257b5aac950af3e33cc7fd0069dcd166d556" Feb 17 18:53:43 crc kubenswrapper[4680]: I0217 18:53:43.733222 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nxnr6"] Feb 17 18:53:43 crc kubenswrapper[4680]: E0217 18:53:43.734224 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1db8fb3-ba63-4be7-8046-3bb8d15728e5" containerName="registry-server" Feb 17 18:53:43 crc kubenswrapper[4680]: I0217 18:53:43.734242 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1db8fb3-ba63-4be7-8046-3bb8d15728e5" containerName="registry-server" Feb 17 18:53:43 crc kubenswrapper[4680]: E0217 18:53:43.734257 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1db8fb3-ba63-4be7-8046-3bb8d15728e5" containerName="extract-utilities" Feb 17 18:53:43 crc kubenswrapper[4680]: I0217 18:53:43.734264 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1db8fb3-ba63-4be7-8046-3bb8d15728e5" containerName="extract-utilities" Feb 17 18:53:43 crc kubenswrapper[4680]: E0217 18:53:43.734296 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1db8fb3-ba63-4be7-8046-3bb8d15728e5" containerName="extract-content" Feb 17 18:53:43 crc kubenswrapper[4680]: I0217 18:53:43.734303 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1db8fb3-ba63-4be7-8046-3bb8d15728e5" containerName="extract-content" Feb 17 18:53:43 crc kubenswrapper[4680]: I0217 18:53:43.734527 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1db8fb3-ba63-4be7-8046-3bb8d15728e5" containerName="registry-server" Feb 17 18:53:43 crc kubenswrapper[4680]: I0217 18:53:43.735924 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nxnr6" Feb 17 18:53:43 crc kubenswrapper[4680]: I0217 18:53:43.783090 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nxnr6"] Feb 17 18:53:43 crc kubenswrapper[4680]: I0217 18:53:43.891579 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cm8n\" (UniqueName: \"kubernetes.io/projected/6f19eb54-2f7f-4e5d-b77d-17c96534a66a-kube-api-access-7cm8n\") pod \"redhat-operators-nxnr6\" (UID: \"6f19eb54-2f7f-4e5d-b77d-17c96534a66a\") " pod="openshift-marketplace/redhat-operators-nxnr6" Feb 17 18:53:43 crc kubenswrapper[4680]: I0217 18:53:43.891653 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f19eb54-2f7f-4e5d-b77d-17c96534a66a-utilities\") pod \"redhat-operators-nxnr6\" (UID: \"6f19eb54-2f7f-4e5d-b77d-17c96534a66a\") " pod="openshift-marketplace/redhat-operators-nxnr6" Feb 17 18:53:43 crc kubenswrapper[4680]: I0217 18:53:43.891780 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f19eb54-2f7f-4e5d-b77d-17c96534a66a-catalog-content\") pod \"redhat-operators-nxnr6\" (UID: \"6f19eb54-2f7f-4e5d-b77d-17c96534a66a\") " pod="openshift-marketplace/redhat-operators-nxnr6" Feb 17 18:53:43 crc kubenswrapper[4680]: I0217 18:53:43.993207 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cm8n\" (UniqueName: \"kubernetes.io/projected/6f19eb54-2f7f-4e5d-b77d-17c96534a66a-kube-api-access-7cm8n\") pod \"redhat-operators-nxnr6\" (UID: \"6f19eb54-2f7f-4e5d-b77d-17c96534a66a\") " pod="openshift-marketplace/redhat-operators-nxnr6" Feb 17 18:53:43 crc kubenswrapper[4680]: I0217 18:53:43.993577 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f19eb54-2f7f-4e5d-b77d-17c96534a66a-utilities\") pod \"redhat-operators-nxnr6\" (UID: \"6f19eb54-2f7f-4e5d-b77d-17c96534a66a\") " pod="openshift-marketplace/redhat-operators-nxnr6" Feb 17 18:53:43 crc kubenswrapper[4680]: I0217 18:53:43.993673 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f19eb54-2f7f-4e5d-b77d-17c96534a66a-catalog-content\") pod \"redhat-operators-nxnr6\" (UID: \"6f19eb54-2f7f-4e5d-b77d-17c96534a66a\") " pod="openshift-marketplace/redhat-operators-nxnr6" Feb 17 18:53:43 crc kubenswrapper[4680]: I0217 18:53:43.994394 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f19eb54-2f7f-4e5d-b77d-17c96534a66a-utilities\") pod \"redhat-operators-nxnr6\" (UID: \"6f19eb54-2f7f-4e5d-b77d-17c96534a66a\") " pod="openshift-marketplace/redhat-operators-nxnr6" Feb 17 18:53:43 crc kubenswrapper[4680]: I0217 18:53:43.994401 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f19eb54-2f7f-4e5d-b77d-17c96534a66a-catalog-content\") pod \"redhat-operators-nxnr6\" (UID: \"6f19eb54-2f7f-4e5d-b77d-17c96534a66a\") " pod="openshift-marketplace/redhat-operators-nxnr6" Feb 17 18:53:44 crc kubenswrapper[4680]: I0217 18:53:44.026374 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7cm8n\" (UniqueName: \"kubernetes.io/projected/6f19eb54-2f7f-4e5d-b77d-17c96534a66a-kube-api-access-7cm8n\") pod \"redhat-operators-nxnr6\" (UID: \"6f19eb54-2f7f-4e5d-b77d-17c96534a66a\") " pod="openshift-marketplace/redhat-operators-nxnr6" Feb 17 18:53:44 crc kubenswrapper[4680]: I0217 18:53:44.075983 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nxnr6" Feb 17 18:53:44 crc kubenswrapper[4680]: W0217 18:53:44.718165 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f19eb54_2f7f_4e5d_b77d_17c96534a66a.slice/crio-b804a493e408aed608baf70330206e51c2bb99c573127bd90e993d653820fd89 WatchSource:0}: Error finding container b804a493e408aed608baf70330206e51c2bb99c573127bd90e993d653820fd89: Status 404 returned error can't find the container with id b804a493e408aed608baf70330206e51c2bb99c573127bd90e993d653820fd89 Feb 17 18:53:44 crc kubenswrapper[4680]: I0217 18:53:44.723648 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nxnr6"] Feb 17 18:53:45 crc kubenswrapper[4680]: I0217 18:53:45.569705 4680 generic.go:334] "Generic (PLEG): container finished" podID="6f19eb54-2f7f-4e5d-b77d-17c96534a66a" containerID="d25a230fab1f4390273663f7357a8c2b39e9a9361c41df3e21f4ac90856ca2f5" exitCode=0 Feb 17 18:53:45 crc kubenswrapper[4680]: I0217 18:53:45.570335 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxnr6" event={"ID":"6f19eb54-2f7f-4e5d-b77d-17c96534a66a","Type":"ContainerDied","Data":"d25a230fab1f4390273663f7357a8c2b39e9a9361c41df3e21f4ac90856ca2f5"} Feb 17 18:53:45 crc kubenswrapper[4680]: I0217 18:53:45.574927 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxnr6" event={"ID":"6f19eb54-2f7f-4e5d-b77d-17c96534a66a","Type":"ContainerStarted","Data":"b804a493e408aed608baf70330206e51c2bb99c573127bd90e993d653820fd89"} Feb 17 18:53:46 crc kubenswrapper[4680]: I0217 18:53:46.591938 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxnr6" event={"ID":"6f19eb54-2f7f-4e5d-b77d-17c96534a66a","Type":"ContainerStarted","Data":"6721857dbc19f1fff06a64da7ff418f57fd8487841d33a131761842746f6c318"} Feb 17 18:53:50 crc kubenswrapper[4680]: I0217 18:53:50.624585 4680 generic.go:334] "Generic (PLEG): container finished" podID="6f19eb54-2f7f-4e5d-b77d-17c96534a66a" containerID="6721857dbc19f1fff06a64da7ff418f57fd8487841d33a131761842746f6c318" exitCode=0 Feb 17 18:53:50 crc kubenswrapper[4680]: I0217 18:53:50.624663 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxnr6" event={"ID":"6f19eb54-2f7f-4e5d-b77d-17c96534a66a","Type":"ContainerDied","Data":"6721857dbc19f1fff06a64da7ff418f57fd8487841d33a131761842746f6c318"} Feb 17 18:53:52 crc kubenswrapper[4680]: I0217 18:53:52.649818 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxnr6" event={"ID":"6f19eb54-2f7f-4e5d-b77d-17c96534a66a","Type":"ContainerStarted","Data":"ed9e0cbf2e4d9ba967e7fc5193bb1c4a412cf311fe6b7403f18a5acd3417ff5d"} Feb 17 18:53:52 crc kubenswrapper[4680]: I0217 18:53:52.677213 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nxnr6" podStartSLOduration=3.762307961 podStartE2EDuration="9.677192548s" 
podCreationTimestamp="2026-02-17 18:53:43 +0000 UTC" firstStartedPulling="2026-02-17 18:53:45.571967657 +0000 UTC m=+4155.122866336" lastFinishedPulling="2026-02-17 18:53:51.486852214 +0000 UTC m=+4161.037750923" observedRunningTime="2026-02-17 18:53:52.667269682 +0000 UTC m=+4162.218168361" watchObservedRunningTime="2026-02-17 18:53:52.677192548 +0000 UTC m=+4162.228091227" Feb 17 18:53:54 crc kubenswrapper[4680]: I0217 18:53:54.077168 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nxnr6" Feb 17 18:53:54 crc kubenswrapper[4680]: I0217 18:53:54.078067 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nxnr6" Feb 17 18:53:55 crc kubenswrapper[4680]: I0217 18:53:55.123221 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nxnr6" podUID="6f19eb54-2f7f-4e5d-b77d-17c96534a66a" containerName="registry-server" probeResult="failure" output=< Feb 17 18:53:55 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Feb 17 18:53:55 crc kubenswrapper[4680]: > Feb 17 18:54:05 crc kubenswrapper[4680]: I0217 18:54:05.119155 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nxnr6" podUID="6f19eb54-2f7f-4e5d-b77d-17c96534a66a" containerName="registry-server" probeResult="failure" output=< Feb 17 18:54:05 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Feb 17 18:54:05 crc kubenswrapper[4680]: > Feb 17 18:54:14 crc kubenswrapper[4680]: I0217 18:54:14.121627 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nxnr6" Feb 17 18:54:14 crc kubenswrapper[4680]: I0217 18:54:14.189679 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nxnr6" Feb 17 18:54:14 crc kubenswrapper[4680]: I0217 18:54:14.935618 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nxnr6"] Feb 17 18:54:15 crc kubenswrapper[4680]: I0217 18:54:15.825134 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nxnr6" podUID="6f19eb54-2f7f-4e5d-b77d-17c96534a66a" containerName="registry-server" containerID="cri-o://ed9e0cbf2e4d9ba967e7fc5193bb1c4a412cf311fe6b7403f18a5acd3417ff5d" gracePeriod=2 Feb 17 18:54:16 crc kubenswrapper[4680]: I0217 18:54:16.375065 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nxnr6" Feb 17 18:54:16 crc kubenswrapper[4680]: I0217 18:54:16.528747 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f19eb54-2f7f-4e5d-b77d-17c96534a66a-utilities\") pod \"6f19eb54-2f7f-4e5d-b77d-17c96534a66a\" (UID: \"6f19eb54-2f7f-4e5d-b77d-17c96534a66a\") " Feb 17 18:54:16 crc kubenswrapper[4680]: I0217 18:54:16.528950 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cm8n\" (UniqueName: \"kubernetes.io/projected/6f19eb54-2f7f-4e5d-b77d-17c96534a66a-kube-api-access-7cm8n\") pod \"6f19eb54-2f7f-4e5d-b77d-17c96534a66a\" (UID: \"6f19eb54-2f7f-4e5d-b77d-17c96534a66a\") " Feb 17 18:54:16 crc kubenswrapper[4680]: I0217 18:54:16.529113 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f19eb54-2f7f-4e5d-b77d-17c96534a66a-catalog-content\") pod \"6f19eb54-2f7f-4e5d-b77d-17c96534a66a\" (UID: \"6f19eb54-2f7f-4e5d-b77d-17c96534a66a\") " Feb 17 18:54:16 crc kubenswrapper[4680]: I0217 18:54:16.529999 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f19eb54-2f7f-4e5d-b77d-17c96534a66a-utilities" (OuterVolumeSpecName: "utilities") pod "6f19eb54-2f7f-4e5d-b77d-17c96534a66a" (UID: "6f19eb54-2f7f-4e5d-b77d-17c96534a66a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:54:16 crc kubenswrapper[4680]: I0217 18:54:16.537585 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f19eb54-2f7f-4e5d-b77d-17c96534a66a-kube-api-access-7cm8n" (OuterVolumeSpecName: "kube-api-access-7cm8n") pod "6f19eb54-2f7f-4e5d-b77d-17c96534a66a" (UID: "6f19eb54-2f7f-4e5d-b77d-17c96534a66a"). InnerVolumeSpecName "kube-api-access-7cm8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:54:16 crc kubenswrapper[4680]: I0217 18:54:16.631565 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f19eb54-2f7f-4e5d-b77d-17c96534a66a-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 18:54:16 crc kubenswrapper[4680]: I0217 18:54:16.631600 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cm8n\" (UniqueName: \"kubernetes.io/projected/6f19eb54-2f7f-4e5d-b77d-17c96534a66a-kube-api-access-7cm8n\") on node \"crc\" DevicePath \"\"" Feb 17 18:54:16 crc kubenswrapper[4680]: I0217 18:54:16.661710 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f19eb54-2f7f-4e5d-b77d-17c96534a66a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f19eb54-2f7f-4e5d-b77d-17c96534a66a" (UID: "6f19eb54-2f7f-4e5d-b77d-17c96534a66a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:54:16 crc kubenswrapper[4680]: I0217 18:54:16.733218 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f19eb54-2f7f-4e5d-b77d-17c96534a66a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 18:54:16 crc kubenswrapper[4680]: I0217 18:54:16.841380 4680 generic.go:334] "Generic (PLEG): container finished" podID="6f19eb54-2f7f-4e5d-b77d-17c96534a66a" containerID="ed9e0cbf2e4d9ba967e7fc5193bb1c4a412cf311fe6b7403f18a5acd3417ff5d" exitCode=0 Feb 17 18:54:16 crc kubenswrapper[4680]: I0217 18:54:16.841424 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxnr6" event={"ID":"6f19eb54-2f7f-4e5d-b77d-17c96534a66a","Type":"ContainerDied","Data":"ed9e0cbf2e4d9ba967e7fc5193bb1c4a412cf311fe6b7403f18a5acd3417ff5d"} Feb 17 18:54:16 crc kubenswrapper[4680]: I0217 18:54:16.841456 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxnr6" event={"ID":"6f19eb54-2f7f-4e5d-b77d-17c96534a66a","Type":"ContainerDied","Data":"b804a493e408aed608baf70330206e51c2bb99c573127bd90e993d653820fd89"} Feb 17 18:54:16 crc kubenswrapper[4680]: I0217 18:54:16.841475 4680 scope.go:117] "RemoveContainer" containerID="ed9e0cbf2e4d9ba967e7fc5193bb1c4a412cf311fe6b7403f18a5acd3417ff5d" Feb 17 18:54:16 crc kubenswrapper[4680]: I0217 18:54:16.841478 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nxnr6" Feb 17 18:54:16 crc kubenswrapper[4680]: I0217 18:54:16.867361 4680 scope.go:117] "RemoveContainer" containerID="6721857dbc19f1fff06a64da7ff418f57fd8487841d33a131761842746f6c318" Feb 17 18:54:16 crc kubenswrapper[4680]: I0217 18:54:16.890438 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nxnr6"] Feb 17 18:54:16 crc kubenswrapper[4680]: I0217 18:54:16.901480 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nxnr6"] Feb 17 18:54:16 crc kubenswrapper[4680]: I0217 18:54:16.908736 4680 scope.go:117] "RemoveContainer" containerID="d25a230fab1f4390273663f7357a8c2b39e9a9361c41df3e21f4ac90856ca2f5" Feb 17 18:54:16 crc kubenswrapper[4680]: I0217 18:54:16.948117 4680 scope.go:117] "RemoveContainer" containerID="ed9e0cbf2e4d9ba967e7fc5193bb1c4a412cf311fe6b7403f18a5acd3417ff5d" Feb 17 18:54:16 crc kubenswrapper[4680]: E0217 18:54:16.951300 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed9e0cbf2e4d9ba967e7fc5193bb1c4a412cf311fe6b7403f18a5acd3417ff5d\": container with ID starting with ed9e0cbf2e4d9ba967e7fc5193bb1c4a412cf311fe6b7403f18a5acd3417ff5d not found: ID does not exist" containerID="ed9e0cbf2e4d9ba967e7fc5193bb1c4a412cf311fe6b7403f18a5acd3417ff5d" Feb 17 18:54:16 crc kubenswrapper[4680]: I0217 18:54:16.951365 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed9e0cbf2e4d9ba967e7fc5193bb1c4a412cf311fe6b7403f18a5acd3417ff5d"} err="failed to get container status \"ed9e0cbf2e4d9ba967e7fc5193bb1c4a412cf311fe6b7403f18a5acd3417ff5d\": rpc error: code = NotFound desc = could not find container \"ed9e0cbf2e4d9ba967e7fc5193bb1c4a412cf311fe6b7403f18a5acd3417ff5d\": container with ID starting with ed9e0cbf2e4d9ba967e7fc5193bb1c4a412cf311fe6b7403f18a5acd3417ff5d not found: ID does not exist" Feb 17 18:54:16 crc 
kubenswrapper[4680]: I0217 18:54:16.951391 4680 scope.go:117] "RemoveContainer" containerID="6721857dbc19f1fff06a64da7ff418f57fd8487841d33a131761842746f6c318" Feb 17 18:54:16 crc kubenswrapper[4680]: E0217 18:54:16.951769 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6721857dbc19f1fff06a64da7ff418f57fd8487841d33a131761842746f6c318\": container with ID starting with 6721857dbc19f1fff06a64da7ff418f57fd8487841d33a131761842746f6c318 not found: ID does not exist" containerID="6721857dbc19f1fff06a64da7ff418f57fd8487841d33a131761842746f6c318" Feb 17 18:54:16 crc kubenswrapper[4680]: I0217 18:54:16.951827 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6721857dbc19f1fff06a64da7ff418f57fd8487841d33a131761842746f6c318"} err="failed to get container status \"6721857dbc19f1fff06a64da7ff418f57fd8487841d33a131761842746f6c318\": rpc error: code = NotFound desc = could not find container \"6721857dbc19f1fff06a64da7ff418f57fd8487841d33a131761842746f6c318\": container with ID starting with 6721857dbc19f1fff06a64da7ff418f57fd8487841d33a131761842746f6c318 not found: ID does not exist" Feb 17 18:54:16 crc kubenswrapper[4680]: I0217 18:54:16.951870 4680 scope.go:117] "RemoveContainer" containerID="d25a230fab1f4390273663f7357a8c2b39e9a9361c41df3e21f4ac90856ca2f5" Feb 17 18:54:16 crc kubenswrapper[4680]: E0217 18:54:16.952167 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d25a230fab1f4390273663f7357a8c2b39e9a9361c41df3e21f4ac90856ca2f5\": container with ID starting with d25a230fab1f4390273663f7357a8c2b39e9a9361c41df3e21f4ac90856ca2f5 not found: ID does not exist" containerID="d25a230fab1f4390273663f7357a8c2b39e9a9361c41df3e21f4ac90856ca2f5" Feb 17 18:54:16 crc kubenswrapper[4680]: I0217 18:54:16.952192 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d25a230fab1f4390273663f7357a8c2b39e9a9361c41df3e21f4ac90856ca2f5"} err="failed to get container status \"d25a230fab1f4390273663f7357a8c2b39e9a9361c41df3e21f4ac90856ca2f5\": rpc error: code = NotFound desc = could not find container \"d25a230fab1f4390273663f7357a8c2b39e9a9361c41df3e21f4ac90856ca2f5\": container with ID starting with d25a230fab1f4390273663f7357a8c2b39e9a9361c41df3e21f4ac90856ca2f5 not found: ID does not exist" Feb 17 18:54:17 crc kubenswrapper[4680]: I0217 18:54:17.118371 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f19eb54-2f7f-4e5d-b77d-17c96534a66a" path="/var/lib/kubelet/pods/6f19eb54-2f7f-4e5d-b77d-17c96534a66a/volumes" Feb 17 18:54:59 crc kubenswrapper[4680]: I0217 18:54:59.189341 4680 generic.go:334] "Generic (PLEG): container finished" podID="327c58a4-13db-4bf2-b6e4-755c23c2a36f" containerID="0d9da85c7ef970499b8dba2e2e88d3281ae90a76d93ded80046905a1300c3b51" exitCode=0 Feb 17 18:54:59 crc kubenswrapper[4680]: I0217 18:54:59.189470 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"327c58a4-13db-4bf2-b6e4-755c23c2a36f","Type":"ContainerDied","Data":"0d9da85c7ef970499b8dba2e2e88d3281ae90a76d93ded80046905a1300c3b51"} Feb 17 18:55:00 crc kubenswrapper[4680]: I0217 18:55:00.603078 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 17 18:55:00 crc kubenswrapper[4680]: I0217 18:55:00.673027 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/327c58a4-13db-4bf2-b6e4-755c23c2a36f-ssh-key\") pod \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " Feb 17 18:55:00 crc kubenswrapper[4680]: I0217 18:55:00.673096 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/327c58a4-13db-4bf2-b6e4-755c23c2a36f-test-operator-ephemeral-temporary\") pod \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " Feb 17 18:55:00 crc kubenswrapper[4680]: I0217 18:55:00.673133 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/327c58a4-13db-4bf2-b6e4-755c23c2a36f-ca-certs\") pod \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " Feb 17 18:55:00 crc kubenswrapper[4680]: I0217 18:55:00.673164 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/327c58a4-13db-4bf2-b6e4-755c23c2a36f-test-operator-ephemeral-workdir\") pod \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " Feb 17 18:55:00 crc kubenswrapper[4680]: I0217 18:55:00.673295 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/327c58a4-13db-4bf2-b6e4-755c23c2a36f-openstack-config\") pod \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " Feb 17 18:55:00 crc kubenswrapper[4680]: I0217 18:55:00.673359 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " Feb 17 18:55:00 crc kubenswrapper[4680]: I0217 18:55:00.673403 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/327c58a4-13db-4bf2-b6e4-755c23c2a36f-openstack-config-secret\") pod \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " Feb 17 18:55:00 crc kubenswrapper[4680]: I0217 18:55:00.673433 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jplwl\" (UniqueName: \"kubernetes.io/projected/327c58a4-13db-4bf2-b6e4-755c23c2a36f-kube-api-access-jplwl\") pod \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " Feb 17 18:55:00 crc kubenswrapper[4680]: I0217 18:55:00.673460 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/327c58a4-13db-4bf2-b6e4-755c23c2a36f-config-data\") pod \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\" (UID: \"327c58a4-13db-4bf2-b6e4-755c23c2a36f\") " Feb 17 18:55:00 crc kubenswrapper[4680]: I0217 18:55:00.674342 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/327c58a4-13db-4bf2-b6e4-755c23c2a36f-test-operator-ephemeral-temporary" (OuterVolumeSpecName: 
"test-operator-ephemeral-temporary") pod "327c58a4-13db-4bf2-b6e4-755c23c2a36f" (UID: "327c58a4-13db-4bf2-b6e4-755c23c2a36f"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:55:00 crc kubenswrapper[4680]: I0217 18:55:00.676355 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/327c58a4-13db-4bf2-b6e4-755c23c2a36f-config-data" (OuterVolumeSpecName: "config-data") pod "327c58a4-13db-4bf2-b6e4-755c23c2a36f" (UID: "327c58a4-13db-4bf2-b6e4-755c23c2a36f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:55:00 crc kubenswrapper[4680]: I0217 18:55:00.681643 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/327c58a4-13db-4bf2-b6e4-755c23c2a36f-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "327c58a4-13db-4bf2-b6e4-755c23c2a36f" (UID: "327c58a4-13db-4bf2-b6e4-755c23c2a36f"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:55:00 crc kubenswrapper[4680]: I0217 18:55:00.682208 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "test-operator-logs") pod "327c58a4-13db-4bf2-b6e4-755c23c2a36f" (UID: "327c58a4-13db-4bf2-b6e4-755c23c2a36f"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 17 18:55:00 crc kubenswrapper[4680]: I0217 18:55:00.690026 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/327c58a4-13db-4bf2-b6e4-755c23c2a36f-kube-api-access-jplwl" (OuterVolumeSpecName: "kube-api-access-jplwl") pod "327c58a4-13db-4bf2-b6e4-755c23c2a36f" (UID: "327c58a4-13db-4bf2-b6e4-755c23c2a36f"). InnerVolumeSpecName "kube-api-access-jplwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:55:00 crc kubenswrapper[4680]: I0217 18:55:00.703528 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/327c58a4-13db-4bf2-b6e4-755c23c2a36f-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "327c58a4-13db-4bf2-b6e4-755c23c2a36f" (UID: "327c58a4-13db-4bf2-b6e4-755c23c2a36f"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:55:00 crc kubenswrapper[4680]: I0217 18:55:00.703788 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/327c58a4-13db-4bf2-b6e4-755c23c2a36f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "327c58a4-13db-4bf2-b6e4-755c23c2a36f" (UID: "327c58a4-13db-4bf2-b6e4-755c23c2a36f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:55:00 crc kubenswrapper[4680]: I0217 18:55:00.716036 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/327c58a4-13db-4bf2-b6e4-755c23c2a36f-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "327c58a4-13db-4bf2-b6e4-755c23c2a36f" (UID: "327c58a4-13db-4bf2-b6e4-755c23c2a36f"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 18:55:00 crc kubenswrapper[4680]: I0217 18:55:00.742699 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/327c58a4-13db-4bf2-b6e4-755c23c2a36f-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "327c58a4-13db-4bf2-b6e4-755c23c2a36f" (UID: "327c58a4-13db-4bf2-b6e4-755c23c2a36f"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 18:55:00 crc kubenswrapper[4680]: I0217 18:55:00.776288 4680 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/327c58a4-13db-4bf2-b6e4-755c23c2a36f-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 17 18:55:00 crc kubenswrapper[4680]: I0217 18:55:00.776343 4680 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/327c58a4-13db-4bf2-b6e4-755c23c2a36f-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 17 18:55:00 crc kubenswrapper[4680]: I0217 18:55:00.776353 4680 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/327c58a4-13db-4bf2-b6e4-755c23c2a36f-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 17 18:55:00 crc kubenswrapper[4680]: I0217 18:55:00.776368 4680 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/327c58a4-13db-4bf2-b6e4-755c23c2a36f-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 17 18:55:00 crc kubenswrapper[4680]: I0217 18:55:00.776984 4680 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 17 18:55:00 crc kubenswrapper[4680]: I0217 18:55:00.777004 4680 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/327c58a4-13db-4bf2-b6e4-755c23c2a36f-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 17 18:55:00 crc kubenswrapper[4680]: I0217 18:55:00.777017 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jplwl\" (UniqueName: \"kubernetes.io/projected/327c58a4-13db-4bf2-b6e4-755c23c2a36f-kube-api-access-jplwl\") on node \"crc\" DevicePath \"\"" Feb 17 18:55:00 crc kubenswrapper[4680]: I0217 18:55:00.777028 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/327c58a4-13db-4bf2-b6e4-755c23c2a36f-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 18:55:00 crc kubenswrapper[4680]: I0217 18:55:00.777036 4680 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/327c58a4-13db-4bf2-b6e4-755c23c2a36f-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 17 18:55:00 crc kubenswrapper[4680]: I0217 18:55:00.808465 4680 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 17 18:55:00 crc kubenswrapper[4680]: I0217 18:55:00.879485 4680 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Feb 17 18:55:01 crc kubenswrapper[4680]: I0217 18:55:01.208425 4680 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/tempest-tests-tempest" event={"ID":"327c58a4-13db-4bf2-b6e4-755c23c2a36f","Type":"ContainerDied","Data":"4d4c291692acc2369859ae51763708f4c15ce1b79cc25558629ad3f94173f586"} Feb 17 18:55:01 crc kubenswrapper[4680]: I0217 18:55:01.208483 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d4c291692acc2369859ae51763708f4c15ce1b79cc25558629ad3f94173f586" Feb 17 18:55:01 crc kubenswrapper[4680]: I0217 18:55:01.208790 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 17 18:55:03 crc kubenswrapper[4680]: I0217 18:55:03.786337 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 17 18:55:03 crc kubenswrapper[4680]: E0217 18:55:03.787433 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f19eb54-2f7f-4e5d-b77d-17c96534a66a" containerName="extract-content" Feb 17 18:55:03 crc kubenswrapper[4680]: I0217 18:55:03.787449 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f19eb54-2f7f-4e5d-b77d-17c96534a66a" containerName="extract-content" Feb 17 18:55:03 crc kubenswrapper[4680]: E0217 18:55:03.787463 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f19eb54-2f7f-4e5d-b77d-17c96534a66a" containerName="registry-server" Feb 17 18:55:03 crc kubenswrapper[4680]: I0217 18:55:03.787470 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f19eb54-2f7f-4e5d-b77d-17c96534a66a" containerName="registry-server" Feb 17 18:55:03 crc kubenswrapper[4680]: E0217 18:55:03.787491 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="327c58a4-13db-4bf2-b6e4-755c23c2a36f" containerName="tempest-tests-tempest-tests-runner" Feb 17 18:55:03 crc kubenswrapper[4680]: I0217 18:55:03.787498 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="327c58a4-13db-4bf2-b6e4-755c23c2a36f" containerName="tempest-tests-tempest-tests-runner" Feb 17 18:55:03 crc kubenswrapper[4680]: E0217 18:55:03.787508 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f19eb54-2f7f-4e5d-b77d-17c96534a66a" containerName="extract-utilities" Feb 17 18:55:03 crc kubenswrapper[4680]: I0217 18:55:03.787515 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f19eb54-2f7f-4e5d-b77d-17c96534a66a" containerName="extract-utilities" Feb 17 18:55:03 crc kubenswrapper[4680]: I0217 18:55:03.787682 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f19eb54-2f7f-4e5d-b77d-17c96534a66a" containerName="registry-server" Feb 17 18:55:03 crc kubenswrapper[4680]: I0217 18:55:03.787693 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="327c58a4-13db-4bf2-b6e4-755c23c2a36f" containerName="tempest-tests-tempest-tests-runner" Feb 17 18:55:03 crc kubenswrapper[4680]: I0217 18:55:03.788593 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 17 18:55:03 crc kubenswrapper[4680]: I0217 18:55:03.792567 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-vvjj6" Feb 17 18:55:03 crc kubenswrapper[4680]: I0217 18:55:03.796904 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 17 18:55:03 crc kubenswrapper[4680]: I0217 18:55:03.937110 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8d217e59-e2e9-4cfa-a272-5cc6c543347f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 17 18:55:03 crc kubenswrapper[4680]: I0217 18:55:03.937251 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2mvs\" (UniqueName: \"kubernetes.io/projected/8d217e59-e2e9-4cfa-a272-5cc6c543347f-kube-api-access-g2mvs\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8d217e59-e2e9-4cfa-a272-5cc6c543347f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 17 18:55:04 crc kubenswrapper[4680]: I0217 18:55:04.039589 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8d217e59-e2e9-4cfa-a272-5cc6c543347f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 17 18:55:04 crc kubenswrapper[4680]: I0217 18:55:04.039716 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2mvs\" (UniqueName: \"kubernetes.io/projected/8d217e59-e2e9-4cfa-a272-5cc6c543347f-kube-api-access-g2mvs\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8d217e59-e2e9-4cfa-a272-5cc6c543347f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 17 18:55:04 crc kubenswrapper[4680]: I0217 18:55:04.041240 4680 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8d217e59-e2e9-4cfa-a272-5cc6c543347f\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 17 18:55:04 crc kubenswrapper[4680]: I0217 18:55:04.059460 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2mvs\" (UniqueName: \"kubernetes.io/projected/8d217e59-e2e9-4cfa-a272-5cc6c543347f-kube-api-access-g2mvs\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8d217e59-e2e9-4cfa-a272-5cc6c543347f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 17 18:55:04 crc kubenswrapper[4680]: I0217 18:55:04.064682 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8d217e59-e2e9-4cfa-a272-5cc6c543347f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 17 18:55:04 crc 
kubenswrapper[4680]: I0217 18:55:04.105522 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 17 18:55:04 crc kubenswrapper[4680]: I0217 18:55:04.572733 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 17 18:55:05 crc kubenswrapper[4680]: I0217 18:55:05.256951 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"8d217e59-e2e9-4cfa-a272-5cc6c543347f","Type":"ContainerStarted","Data":"ecb6b52d740c8a1aa967059f8f47143fdcbb8ae2ec7ab53ec34e7df9139794f7"} Feb 17 18:55:05 crc kubenswrapper[4680]: I0217 18:55:05.588995 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:55:05 crc kubenswrapper[4680]: I0217 18:55:05.589064 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:55:06 crc kubenswrapper[4680]: I0217 18:55:06.265587 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"8d217e59-e2e9-4cfa-a272-5cc6c543347f","Type":"ContainerStarted","Data":"af2d46cf8b43ababec8e74679377d0c6eafe8d072a8e50c6bc0b01ce64cbe4ac"} Feb 17 18:55:06 crc kubenswrapper[4680]: I0217 18:55:06.282899 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.208812638 podStartE2EDuration="3.28287952s" podCreationTimestamp="2026-02-17 18:55:03 +0000 UTC" firstStartedPulling="2026-02-17 18:55:04.585153128 +0000 UTC m=+4234.136051807" lastFinishedPulling="2026-02-17 18:55:05.65922001 +0000 UTC m=+4235.210118689" observedRunningTime="2026-02-17 18:55:06.27768265 +0000 UTC m=+4235.828581339" watchObservedRunningTime="2026-02-17 18:55:06.28287952 +0000 UTC m=+4235.833778189" Feb 17 18:55:26 crc kubenswrapper[4680]: I0217 18:55:26.703587 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jgmsc/must-gather-ktfst"] Feb 17 18:55:26 crc kubenswrapper[4680]: I0217 18:55:26.707751 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jgmsc/must-gather-ktfst" Feb 17 18:55:26 crc kubenswrapper[4680]: I0217 18:55:26.711335 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-jgmsc"/"default-dockercfg-gw9nw" Feb 17 18:55:26 crc kubenswrapper[4680]: I0217 18:55:26.711461 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jgmsc"/"kube-root-ca.crt" Feb 17 18:55:26 crc kubenswrapper[4680]: I0217 18:55:26.729147 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jgmsc/must-gather-ktfst"] Feb 17 18:55:26 crc kubenswrapper[4680]: I0217 18:55:26.729477 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jgmsc"/"openshift-service-ca.crt" Feb 17 18:55:26 crc kubenswrapper[4680]: I0217 18:55:26.824405 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prs2f\" (UniqueName: \"kubernetes.io/projected/c4ba2d8b-b186-4d49-adb1-4765e193e2b9-kube-api-access-prs2f\") pod \"must-gather-ktfst\" (UID: \"c4ba2d8b-b186-4d49-adb1-4765e193e2b9\") " pod="openshift-must-gather-jgmsc/must-gather-ktfst" Feb 17 18:55:26 crc kubenswrapper[4680]: I0217 18:55:26.824706 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c4ba2d8b-b186-4d49-adb1-4765e193e2b9-must-gather-output\") pod \"must-gather-ktfst\" (UID: \"c4ba2d8b-b186-4d49-adb1-4765e193e2b9\") " pod="openshift-must-gather-jgmsc/must-gather-ktfst" Feb 17 18:55:26 crc kubenswrapper[4680]: I0217 18:55:26.927077 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c4ba2d8b-b186-4d49-adb1-4765e193e2b9-must-gather-output\") pod \"must-gather-ktfst\" (UID: \"c4ba2d8b-b186-4d49-adb1-4765e193e2b9\") " pod="openshift-must-gather-jgmsc/must-gather-ktfst" Feb 17 18:55:26 crc kubenswrapper[4680]: I0217 18:55:26.927228 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prs2f\" (UniqueName: \"kubernetes.io/projected/c4ba2d8b-b186-4d49-adb1-4765e193e2b9-kube-api-access-prs2f\") pod \"must-gather-ktfst\" (UID: \"c4ba2d8b-b186-4d49-adb1-4765e193e2b9\") " pod="openshift-must-gather-jgmsc/must-gather-ktfst" Feb 17 18:55:26 crc kubenswrapper[4680]: I0217 18:55:26.927666 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c4ba2d8b-b186-4d49-adb1-4765e193e2b9-must-gather-output\") pod \"must-gather-ktfst\" (UID: \"c4ba2d8b-b186-4d49-adb1-4765e193e2b9\") " pod="openshift-must-gather-jgmsc/must-gather-ktfst" Feb 17 18:55:26 crc kubenswrapper[4680]: I0217 18:55:26.950289 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prs2f\" (UniqueName: \"kubernetes.io/projected/c4ba2d8b-b186-4d49-adb1-4765e193e2b9-kube-api-access-prs2f\") pod \"must-gather-ktfst\" (UID: \"c4ba2d8b-b186-4d49-adb1-4765e193e2b9\") " pod="openshift-must-gather-jgmsc/must-gather-ktfst" Feb 17 18:55:27 crc kubenswrapper[4680]: I0217 18:55:27.025187 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jgmsc/must-gather-ktfst" Feb 17 18:55:27 crc kubenswrapper[4680]: I0217 18:55:27.560203 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jgmsc/must-gather-ktfst"] Feb 17 18:55:28 crc kubenswrapper[4680]: I0217 18:55:28.474920 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jgmsc/must-gather-ktfst" event={"ID":"c4ba2d8b-b186-4d49-adb1-4765e193e2b9","Type":"ContainerStarted","Data":"6ed151d90790996ce42d11bb4fa85edcf7f37ae8e7ebe79d7b3b70caf6149561"} Feb 17 18:55:34 crc kubenswrapper[4680]: I0217 18:55:34.542557 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jgmsc/must-gather-ktfst" event={"ID":"c4ba2d8b-b186-4d49-adb1-4765e193e2b9","Type":"ContainerStarted","Data":"b0447f1a5739910acb2b5292456c1bc2cf47998d7e46fb592e22abe4b2455d31"} Feb 17 18:55:35 crc kubenswrapper[4680]: I0217 18:55:35.552389 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jgmsc/must-gather-ktfst" event={"ID":"c4ba2d8b-b186-4d49-adb1-4765e193e2b9","Type":"ContainerStarted","Data":"7caccfabb95521042eb8da64057fb2f2377ee36c0d2f7ec34e2bc3a2285c6b70"} Feb 17 18:55:35 crc kubenswrapper[4680]: I0217 18:55:35.588823 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:55:35 crc kubenswrapper[4680]: I0217 18:55:35.588893 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:55:40 crc kubenswrapper[4680]: I0217 18:55:40.323789 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jgmsc/must-gather-ktfst" podStartSLOduration=7.560957152 podStartE2EDuration="14.323765427s" podCreationTimestamp="2026-02-17 18:55:26 +0000 UTC" firstStartedPulling="2026-02-17 18:55:27.545415075 +0000 UTC m=+4257.096313754" lastFinishedPulling="2026-02-17 18:55:34.30822334 +0000 UTC m=+4263.859122029" observedRunningTime="2026-02-17 18:55:35.569811968 +0000 UTC m=+4265.120710657" watchObservedRunningTime="2026-02-17 18:55:40.323765427 +0000 UTC m=+4269.874664116" Feb 17 18:55:40 crc kubenswrapper[4680]: I0217 18:55:40.325297 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jgmsc/crc-debug-4bl9j"] Feb 17 18:55:40 crc kubenswrapper[4680]: I0217 18:55:40.326683 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jgmsc/crc-debug-4bl9j" Feb 17 18:55:40 crc kubenswrapper[4680]: I0217 18:55:40.389887 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6cwx\" (UniqueName: \"kubernetes.io/projected/c901b818-2de4-408f-9a15-82900bece549-kube-api-access-n6cwx\") pod \"crc-debug-4bl9j\" (UID: \"c901b818-2de4-408f-9a15-82900bece549\") " pod="openshift-must-gather-jgmsc/crc-debug-4bl9j" Feb 17 18:55:40 crc kubenswrapper[4680]: I0217 18:55:40.390268 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c901b818-2de4-408f-9a15-82900bece549-host\") pod \"crc-debug-4bl9j\" (UID: \"c901b818-2de4-408f-9a15-82900bece549\") " pod="openshift-must-gather-jgmsc/crc-debug-4bl9j" Feb 17 18:55:40 crc kubenswrapper[4680]: I0217 18:55:40.491669 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c901b818-2de4-408f-9a15-82900bece549-host\") pod \"crc-debug-4bl9j\" (UID: \"c901b818-2de4-408f-9a15-82900bece549\") " pod="openshift-must-gather-jgmsc/crc-debug-4bl9j" Feb 17 18:55:40 crc kubenswrapper[4680]: I0217 18:55:40.491748 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6cwx\" (UniqueName: \"kubernetes.io/projected/c901b818-2de4-408f-9a15-82900bece549-kube-api-access-n6cwx\") pod \"crc-debug-4bl9j\" (UID: \"c901b818-2de4-408f-9a15-82900bece549\") " pod="openshift-must-gather-jgmsc/crc-debug-4bl9j" Feb 17 18:55:40 crc kubenswrapper[4680]: I0217 18:55:40.491785 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c901b818-2de4-408f-9a15-82900bece549-host\") pod \"crc-debug-4bl9j\" (UID: \"c901b818-2de4-408f-9a15-82900bece549\") " pod="openshift-must-gather-jgmsc/crc-debug-4bl9j" Feb 17 18:55:40 crc kubenswrapper[4680]: I0217 18:55:40.530547 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6cwx\" (UniqueName: \"kubernetes.io/projected/c901b818-2de4-408f-9a15-82900bece549-kube-api-access-n6cwx\") pod \"crc-debug-4bl9j\" (UID: \"c901b818-2de4-408f-9a15-82900bece549\") " pod="openshift-must-gather-jgmsc/crc-debug-4bl9j" Feb 17 18:55:40 crc kubenswrapper[4680]: I0217 18:55:40.646991 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jgmsc/crc-debug-4bl9j" Feb 17 18:55:41 crc kubenswrapper[4680]: I0217 18:55:41.600568 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jgmsc/crc-debug-4bl9j" event={"ID":"c901b818-2de4-408f-9a15-82900bece549","Type":"ContainerStarted","Data":"ebf188f34d4fc14870ba94865ee60a98ac519c3f5cefd3f60122d46921e7515d"} Feb 17 18:55:51 crc kubenswrapper[4680]: I0217 18:55:51.699625 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jgmsc/crc-debug-4bl9j" event={"ID":"c901b818-2de4-408f-9a15-82900bece549","Type":"ContainerStarted","Data":"06eaa0ccee906935d3448120dc34c47b8163ed502cfe76cc10927f5972fe8e79"} Feb 17 18:55:51 crc kubenswrapper[4680]: I0217 18:55:51.727488 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jgmsc/crc-debug-4bl9j" podStartSLOduration=1.6459897620000001 podStartE2EDuration="11.727465066s" podCreationTimestamp="2026-02-17 18:55:40 +0000 UTC" firstStartedPulling="2026-02-17 18:55:40.747005023 +0000 UTC m=+4270.297903702" lastFinishedPulling="2026-02-17 18:55:50.828480327 +0000 UTC m=+4280.379379006" observedRunningTime="2026-02-17 18:55:51.712148475 +0000 UTC m=+4281.263047154" watchObservedRunningTime="2026-02-17 18:55:51.727465066 +0000 UTC m=+4281.278363745" Feb 17 18:56:05 crc kubenswrapper[4680]: I0217 18:56:05.589111 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 18:56:05 crc kubenswrapper[4680]: I0217 18:56:05.589785 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 18:56:05 crc kubenswrapper[4680]: I0217 18:56:05.589829 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" Feb 17 18:56:05 crc kubenswrapper[4680]: I0217 18:56:05.590356 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5878893ca78c00fe85ae7205f751e33002b0f532b3587ba2302097298986a151"} pod="openshift-machine-config-operator/machine-config-daemon-rfw56" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 18:56:05 crc kubenswrapper[4680]: I0217 18:56:05.590402 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" containerID="cri-o://5878893ca78c00fe85ae7205f751e33002b0f532b3587ba2302097298986a151" gracePeriod=600 Feb 17 18:56:05 crc kubenswrapper[4680]: E0217 18:56:05.716444 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:56:05 crc kubenswrapper[4680]: I0217 18:56:05.825699 4680 generic.go:334] "Generic (PLEG): container finished" podID="8dd08058-016e-4607-8be6-40463674c2fe" containerID="5878893ca78c00fe85ae7205f751e33002b0f532b3587ba2302097298986a151" exitCode=0 Feb 17 18:56:05 crc kubenswrapper[4680]: I0217 18:56:05.825744 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerDied","Data":"5878893ca78c00fe85ae7205f751e33002b0f532b3587ba2302097298986a151"} Feb 17 18:56:05 crc kubenswrapper[4680]: I0217 18:56:05.825781 4680 scope.go:117] "RemoveContainer" containerID="c2588e31b0a321d94a85ae0410d3d021deaa574d637ba38f421c735456e240e7" Feb 17 18:56:05 crc kubenswrapper[4680]: I0217 18:56:05.826472 4680 scope.go:117] "RemoveContainer" containerID="5878893ca78c00fe85ae7205f751e33002b0f532b3587ba2302097298986a151" Feb 17 18:56:05 crc kubenswrapper[4680]: E0217 18:56:05.826709 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:56:21 crc kubenswrapper[4680]: I0217 18:56:21.119682 4680 scope.go:117] "RemoveContainer" containerID="5878893ca78c00fe85ae7205f751e33002b0f532b3587ba2302097298986a151" Feb 17 18:56:21 crc kubenswrapper[4680]: E0217 18:56:21.124613 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:56:33 crc kubenswrapper[4680]: I0217 18:56:33.105123 4680 scope.go:117] "RemoveContainer" containerID="5878893ca78c00fe85ae7205f751e33002b0f532b3587ba2302097298986a151" Feb 17 18:56:33 crc kubenswrapper[4680]: E0217 18:56:33.105933 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:56:38 crc kubenswrapper[4680]: I0217 18:56:38.105965 4680 generic.go:334] "Generic (PLEG): container finished" podID="c901b818-2de4-408f-9a15-82900bece549" containerID="06eaa0ccee906935d3448120dc34c47b8163ed502cfe76cc10927f5972fe8e79" exitCode=0 Feb 17 18:56:38 crc kubenswrapper[4680]: I0217 18:56:38.106040 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jgmsc/crc-debug-4bl9j" event={"ID":"c901b818-2de4-408f-9a15-82900bece549","Type":"ContainerDied","Data":"06eaa0ccee906935d3448120dc34c47b8163ed502cfe76cc10927f5972fe8e79"} Feb 17 18:56:39 crc kubenswrapper[4680]: I0217 18:56:39.237933 4680 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jgmsc/crc-debug-4bl9j" Feb 17 18:56:39 crc kubenswrapper[4680]: I0217 18:56:39.272480 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jgmsc/crc-debug-4bl9j"] Feb 17 18:56:39 crc kubenswrapper[4680]: I0217 18:56:39.281270 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jgmsc/crc-debug-4bl9j"] Feb 17 18:56:39 crc kubenswrapper[4680]: I0217 18:56:39.413998 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c901b818-2de4-408f-9a15-82900bece549-host\") pod \"c901b818-2de4-408f-9a15-82900bece549\" (UID: \"c901b818-2de4-408f-9a15-82900bece549\") " Feb 17 18:56:39 crc kubenswrapper[4680]: I0217 18:56:39.414094 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6cwx\" (UniqueName: \"kubernetes.io/projected/c901b818-2de4-408f-9a15-82900bece549-kube-api-access-n6cwx\") pod \"c901b818-2de4-408f-9a15-82900bece549\" (UID: \"c901b818-2de4-408f-9a15-82900bece549\") " Feb 17 18:56:39 crc kubenswrapper[4680]: I0217 18:56:39.414104 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c901b818-2de4-408f-9a15-82900bece549-host" (OuterVolumeSpecName: "host") pod "c901b818-2de4-408f-9a15-82900bece549" (UID: "c901b818-2de4-408f-9a15-82900bece549"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 18:56:39 crc kubenswrapper[4680]: I0217 18:56:39.414864 4680 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c901b818-2de4-408f-9a15-82900bece549-host\") on node \"crc\" DevicePath \"\"" Feb 17 18:56:39 crc kubenswrapper[4680]: I0217 18:56:39.420629 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c901b818-2de4-408f-9a15-82900bece549-kube-api-access-n6cwx" (OuterVolumeSpecName: "kube-api-access-n6cwx") pod "c901b818-2de4-408f-9a15-82900bece549" (UID: "c901b818-2de4-408f-9a15-82900bece549"). InnerVolumeSpecName "kube-api-access-n6cwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:56:39 crc kubenswrapper[4680]: I0217 18:56:39.516830 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6cwx\" (UniqueName: \"kubernetes.io/projected/c901b818-2de4-408f-9a15-82900bece549-kube-api-access-n6cwx\") on node \"crc\" DevicePath \"\"" Feb 17 18:56:40 crc kubenswrapper[4680]: I0217 18:56:40.122255 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebf188f34d4fc14870ba94865ee60a98ac519c3f5cefd3f60122d46921e7515d" Feb 17 18:56:40 crc kubenswrapper[4680]: I0217 18:56:40.122349 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jgmsc/crc-debug-4bl9j" Feb 17 18:56:40 crc kubenswrapper[4680]: I0217 18:56:40.459032 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jgmsc/crc-debug-r5ckw"] Feb 17 18:56:40 crc kubenswrapper[4680]: E0217 18:56:40.459437 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c901b818-2de4-408f-9a15-82900bece549" containerName="container-00" Feb 17 18:56:40 crc kubenswrapper[4680]: I0217 18:56:40.459448 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="c901b818-2de4-408f-9a15-82900bece549" containerName="container-00" Feb 17 18:56:40 crc kubenswrapper[4680]: I0217 18:56:40.459632 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="c901b818-2de4-408f-9a15-82900bece549" containerName="container-00" Feb 17 18:56:40 crc kubenswrapper[4680]: I0217 18:56:40.460233 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jgmsc/crc-debug-r5ckw" Feb 17 18:56:40 crc kubenswrapper[4680]: I0217 18:56:40.637525 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmkxq\" (UniqueName: \"kubernetes.io/projected/f0e5385d-e444-4677-860e-90d69762ad36-kube-api-access-cmkxq\") pod \"crc-debug-r5ckw\" (UID: \"f0e5385d-e444-4677-860e-90d69762ad36\") " pod="openshift-must-gather-jgmsc/crc-debug-r5ckw" Feb 17 18:56:40 crc kubenswrapper[4680]: I0217 18:56:40.637656 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f0e5385d-e444-4677-860e-90d69762ad36-host\") pod \"crc-debug-r5ckw\" (UID: \"f0e5385d-e444-4677-860e-90d69762ad36\") " pod="openshift-must-gather-jgmsc/crc-debug-r5ckw" Feb 17 18:56:40 crc kubenswrapper[4680]: I0217 18:56:40.738844 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f0e5385d-e444-4677-860e-90d69762ad36-host\") pod \"crc-debug-r5ckw\" (UID: \"f0e5385d-e444-4677-860e-90d69762ad36\") " pod="openshift-must-gather-jgmsc/crc-debug-r5ckw" Feb 17 18:56:40 crc kubenswrapper[4680]: I0217 18:56:40.739000 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f0e5385d-e444-4677-860e-90d69762ad36-host\") pod \"crc-debug-r5ckw\" (UID: \"f0e5385d-e444-4677-860e-90d69762ad36\") " pod="openshift-must-gather-jgmsc/crc-debug-r5ckw" Feb 17 18:56:40 crc kubenswrapper[4680]: I0217 18:56:40.738987 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmkxq\" (UniqueName: \"kubernetes.io/projected/f0e5385d-e444-4677-860e-90d69762ad36-kube-api-access-cmkxq\") pod \"crc-debug-r5ckw\" (UID: \"f0e5385d-e444-4677-860e-90d69762ad36\") " pod="openshift-must-gather-jgmsc/crc-debug-r5ckw" Feb 17 18:56:40 crc kubenswrapper[4680]: I0217 18:56:40.760683 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmkxq\" (UniqueName: \"kubernetes.io/projected/f0e5385d-e444-4677-860e-90d69762ad36-kube-api-access-cmkxq\") pod \"crc-debug-r5ckw\" (UID: \"f0e5385d-e444-4677-860e-90d69762ad36\") " pod="openshift-must-gather-jgmsc/crc-debug-r5ckw" Feb 17 18:56:40 crc kubenswrapper[4680]: I0217 18:56:40.781134 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jgmsc/crc-debug-r5ckw" Feb 17 18:56:41 crc kubenswrapper[4680]: I0217 18:56:41.115228 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c901b818-2de4-408f-9a15-82900bece549" path="/var/lib/kubelet/pods/c901b818-2de4-408f-9a15-82900bece549/volumes" Feb 17 18:56:41 crc kubenswrapper[4680]: I0217 18:56:41.131330 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jgmsc/crc-debug-r5ckw" event={"ID":"f0e5385d-e444-4677-860e-90d69762ad36","Type":"ContainerStarted","Data":"1e228a3b19ce194798c8b0d28127575ecdf1f6222816ffec1245b9de5e5f1433"} Feb 17 18:56:41 crc kubenswrapper[4680]: I0217 18:56:41.131368 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jgmsc/crc-debug-r5ckw" event={"ID":"f0e5385d-e444-4677-860e-90d69762ad36","Type":"ContainerStarted","Data":"60c3dc7f2e5f3fc732ae55b282e7dbccd950f5f92fa120e81d762248684450bb"} Feb 17 18:56:41 crc kubenswrapper[4680]: I0217 18:56:41.149869 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jgmsc/crc-debug-r5ckw" podStartSLOduration=1.149847165 podStartE2EDuration="1.149847165s" podCreationTimestamp="2026-02-17 18:56:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 18:56:41.141640212 +0000 UTC m=+4330.692538891" watchObservedRunningTime="2026-02-17 18:56:41.149847165 +0000 UTC m=+4330.700745844" Feb 17 18:56:42 crc kubenswrapper[4680]: I0217 18:56:42.141379 4680 generic.go:334] "Generic (PLEG): container finished" podID="f0e5385d-e444-4677-860e-90d69762ad36" containerID="1e228a3b19ce194798c8b0d28127575ecdf1f6222816ffec1245b9de5e5f1433" exitCode=0 Feb 17 18:56:42 crc kubenswrapper[4680]: I0217 18:56:42.141589 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jgmsc/crc-debug-r5ckw" event={"ID":"f0e5385d-e444-4677-860e-90d69762ad36","Type":"ContainerDied","Data":"1e228a3b19ce194798c8b0d28127575ecdf1f6222816ffec1245b9de5e5f1433"} Feb 17 18:56:43 crc kubenswrapper[4680]: I0217 18:56:43.261184 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jgmsc/crc-debug-r5ckw" Feb 17 18:56:43 crc kubenswrapper[4680]: I0217 18:56:43.383459 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jgmsc/crc-debug-r5ckw"] Feb 17 18:56:43 crc kubenswrapper[4680]: I0217 18:56:43.385583 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f0e5385d-e444-4677-860e-90d69762ad36-host\") pod \"f0e5385d-e444-4677-860e-90d69762ad36\" (UID: \"f0e5385d-e444-4677-860e-90d69762ad36\") " Feb 17 18:56:43 crc kubenswrapper[4680]: I0217 18:56:43.385743 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmkxq\" (UniqueName: \"kubernetes.io/projected/f0e5385d-e444-4677-860e-90d69762ad36-kube-api-access-cmkxq\") pod \"f0e5385d-e444-4677-860e-90d69762ad36\" (UID: \"f0e5385d-e444-4677-860e-90d69762ad36\") " Feb 17 18:56:43 crc kubenswrapper[4680]: I0217 18:56:43.387247 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0e5385d-e444-4677-860e-90d69762ad36-host" (OuterVolumeSpecName: "host") pod "f0e5385d-e444-4677-860e-90d69762ad36" (UID: "f0e5385d-e444-4677-860e-90d69762ad36"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 18:56:43 crc kubenswrapper[4680]: I0217 18:56:43.392752 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jgmsc/crc-debug-r5ckw"] Feb 17 18:56:43 crc kubenswrapper[4680]: I0217 18:56:43.393180 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0e5385d-e444-4677-860e-90d69762ad36-kube-api-access-cmkxq" (OuterVolumeSpecName: "kube-api-access-cmkxq") pod "f0e5385d-e444-4677-860e-90d69762ad36" (UID: "f0e5385d-e444-4677-860e-90d69762ad36"). InnerVolumeSpecName "kube-api-access-cmkxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:56:43 crc kubenswrapper[4680]: I0217 18:56:43.487564 4680 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f0e5385d-e444-4677-860e-90d69762ad36-host\") on node \"crc\" DevicePath \"\"" Feb 17 18:56:43 crc kubenswrapper[4680]: I0217 18:56:43.487604 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmkxq\" (UniqueName: \"kubernetes.io/projected/f0e5385d-e444-4677-860e-90d69762ad36-kube-api-access-cmkxq\") on node \"crc\" DevicePath \"\"" Feb 17 18:56:44 crc kubenswrapper[4680]: I0217 18:56:44.105807 4680 scope.go:117] "RemoveContainer" containerID="5878893ca78c00fe85ae7205f751e33002b0f532b3587ba2302097298986a151" Feb 17 18:56:44 crc kubenswrapper[4680]: E0217 18:56:44.106013 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:56:44 crc kubenswrapper[4680]: I0217 18:56:44.157515 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60c3dc7f2e5f3fc732ae55b282e7dbccd950f5f92fa120e81d762248684450bb" Feb 17 18:56:44 crc kubenswrapper[4680]: I0217 18:56:44.157583 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jgmsc/crc-debug-r5ckw" Feb 17 18:56:44 crc kubenswrapper[4680]: I0217 18:56:44.642804 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jgmsc/crc-debug-dxjj8"] Feb 17 18:56:44 crc kubenswrapper[4680]: E0217 18:56:44.643735 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e5385d-e444-4677-860e-90d69762ad36" containerName="container-00" Feb 17 18:56:44 crc kubenswrapper[4680]: I0217 18:56:44.643768 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e5385d-e444-4677-860e-90d69762ad36" containerName="container-00" Feb 17 18:56:44 crc kubenswrapper[4680]: I0217 18:56:44.643979 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0e5385d-e444-4677-860e-90d69762ad36" containerName="container-00" Feb 17 18:56:44 crc kubenswrapper[4680]: I0217 18:56:44.644772 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jgmsc/crc-debug-dxjj8" Feb 17 18:56:44 crc kubenswrapper[4680]: I0217 18:56:44.811286 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rfst\" (UniqueName: \"kubernetes.io/projected/6f50257e-c809-48dd-ac1f-7ba9694d0e64-kube-api-access-4rfst\") pod \"crc-debug-dxjj8\" (UID: \"6f50257e-c809-48dd-ac1f-7ba9694d0e64\") " pod="openshift-must-gather-jgmsc/crc-debug-dxjj8" Feb 17 18:56:44 crc kubenswrapper[4680]: I0217 18:56:44.811351 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f50257e-c809-48dd-ac1f-7ba9694d0e64-host\") pod \"crc-debug-dxjj8\" (UID: \"6f50257e-c809-48dd-ac1f-7ba9694d0e64\") " pod="openshift-must-gather-jgmsc/crc-debug-dxjj8" Feb 17 18:56:44 crc kubenswrapper[4680]: I0217 18:56:44.913775 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rfst\" (UniqueName: \"kubernetes.io/projected/6f50257e-c809-48dd-ac1f-7ba9694d0e64-kube-api-access-4rfst\") pod \"crc-debug-dxjj8\" (UID: \"6f50257e-c809-48dd-ac1f-7ba9694d0e64\") " pod="openshift-must-gather-jgmsc/crc-debug-dxjj8" Feb 17 18:56:44 crc kubenswrapper[4680]: I0217 18:56:44.913858 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f50257e-c809-48dd-ac1f-7ba9694d0e64-host\") pod \"crc-debug-dxjj8\" (UID: \"6f50257e-c809-48dd-ac1f-7ba9694d0e64\") " pod="openshift-must-gather-jgmsc/crc-debug-dxjj8" Feb 17 18:56:44 crc kubenswrapper[4680]: I0217 18:56:44.914044 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f50257e-c809-48dd-ac1f-7ba9694d0e64-host\") pod \"crc-debug-dxjj8\" (UID: \"6f50257e-c809-48dd-ac1f-7ba9694d0e64\") " pod="openshift-must-gather-jgmsc/crc-debug-dxjj8" Feb 17 18:56:44 crc kubenswrapper[4680]: I0217 18:56:44.935270 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rfst\" (UniqueName: \"kubernetes.io/projected/6f50257e-c809-48dd-ac1f-7ba9694d0e64-kube-api-access-4rfst\") pod \"crc-debug-dxjj8\" (UID: \"6f50257e-c809-48dd-ac1f-7ba9694d0e64\") " pod="openshift-must-gather-jgmsc/crc-debug-dxjj8" Feb 17 18:56:44 crc kubenswrapper[4680]: I0217 18:56:44.959205 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jgmsc/crc-debug-dxjj8" Feb 17 18:56:45 crc kubenswrapper[4680]: I0217 18:56:45.116312 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0e5385d-e444-4677-860e-90d69762ad36" path="/var/lib/kubelet/pods/f0e5385d-e444-4677-860e-90d69762ad36/volumes" Feb 17 18:56:45 crc kubenswrapper[4680]: I0217 18:56:45.168892 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jgmsc/crc-debug-dxjj8" event={"ID":"6f50257e-c809-48dd-ac1f-7ba9694d0e64","Type":"ContainerStarted","Data":"2be9d66a13e653ba40d44685a186135dc8fb3ae4e73b1cabbb989061e40dda2b"} Feb 17 18:56:46 crc kubenswrapper[4680]: I0217 18:56:46.178036 4680 generic.go:334] "Generic (PLEG): container finished" podID="6f50257e-c809-48dd-ac1f-7ba9694d0e64" containerID="5511c2ef7cbfc8e710994ee4220d8020d0b0aa3c30011a11df8ea9fb39040baf" exitCode=0 Feb 17 18:56:46 crc kubenswrapper[4680]: I0217 18:56:46.178131 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jgmsc/crc-debug-dxjj8" event={"ID":"6f50257e-c809-48dd-ac1f-7ba9694d0e64","Type":"ContainerDied","Data":"5511c2ef7cbfc8e710994ee4220d8020d0b0aa3c30011a11df8ea9fb39040baf"} Feb 17 18:56:46 crc kubenswrapper[4680]: I0217 18:56:46.209026 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jgmsc/crc-debug-dxjj8"] Feb 17 18:56:46 crc kubenswrapper[4680]: I0217 18:56:46.216751 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jgmsc/crc-debug-dxjj8"] Feb 17 18:56:47 crc kubenswrapper[4680]: I0217 18:56:47.297668 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jgmsc/crc-debug-dxjj8" Feb 17 18:56:47 crc kubenswrapper[4680]: I0217 18:56:47.367717 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f50257e-c809-48dd-ac1f-7ba9694d0e64-host\") pod \"6f50257e-c809-48dd-ac1f-7ba9694d0e64\" (UID: \"6f50257e-c809-48dd-ac1f-7ba9694d0e64\") " Feb 17 18:56:47 crc kubenswrapper[4680]: I0217 18:56:47.367915 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rfst\" (UniqueName: \"kubernetes.io/projected/6f50257e-c809-48dd-ac1f-7ba9694d0e64-kube-api-access-4rfst\") pod \"6f50257e-c809-48dd-ac1f-7ba9694d0e64\" (UID: \"6f50257e-c809-48dd-ac1f-7ba9694d0e64\") " Feb 17 18:56:47 crc kubenswrapper[4680]: I0217 18:56:47.369417 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f50257e-c809-48dd-ac1f-7ba9694d0e64-host" (OuterVolumeSpecName: "host") pod "6f50257e-c809-48dd-ac1f-7ba9694d0e64" (UID: "6f50257e-c809-48dd-ac1f-7ba9694d0e64"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 18:56:47 crc kubenswrapper[4680]: I0217 18:56:47.374435 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f50257e-c809-48dd-ac1f-7ba9694d0e64-kube-api-access-4rfst" (OuterVolumeSpecName: "kube-api-access-4rfst") pod "6f50257e-c809-48dd-ac1f-7ba9694d0e64" (UID: "6f50257e-c809-48dd-ac1f-7ba9694d0e64"). InnerVolumeSpecName "kube-api-access-4rfst". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:56:47 crc kubenswrapper[4680]: I0217 18:56:47.470453 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rfst\" (UniqueName: \"kubernetes.io/projected/6f50257e-c809-48dd-ac1f-7ba9694d0e64-kube-api-access-4rfst\") on node \"crc\" DevicePath \"\"" Feb 17 18:56:47 crc kubenswrapper[4680]: I0217 18:56:47.470490 4680 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f50257e-c809-48dd-ac1f-7ba9694d0e64-host\") on node \"crc\" DevicePath \"\"" Feb 17 18:56:48 crc kubenswrapper[4680]: I0217 18:56:48.196736 4680 scope.go:117] "RemoveContainer" containerID="5511c2ef7cbfc8e710994ee4220d8020d0b0aa3c30011a11df8ea9fb39040baf" Feb 17 18:56:48 crc kubenswrapper[4680]: I0217 18:56:48.196886 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jgmsc/crc-debug-dxjj8" Feb 17 18:56:49 crc kubenswrapper[4680]: I0217 18:56:49.114931 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f50257e-c809-48dd-ac1f-7ba9694d0e64" path="/var/lib/kubelet/pods/6f50257e-c809-48dd-ac1f-7ba9694d0e64/volumes" Feb 17 18:56:57 crc kubenswrapper[4680]: I0217 18:56:57.106551 4680 scope.go:117] "RemoveContainer" containerID="5878893ca78c00fe85ae7205f751e33002b0f532b3587ba2302097298986a151" Feb 17 18:56:57 crc kubenswrapper[4680]: E0217 18:56:57.107401 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:57:08 crc kubenswrapper[4680]: I0217 18:57:08.105778 4680 scope.go:117] "RemoveContainer" containerID="5878893ca78c00fe85ae7205f751e33002b0f532b3587ba2302097298986a151" Feb 17 18:57:08 crc kubenswrapper[4680]: E0217 18:57:08.106682 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:57:15 crc kubenswrapper[4680]: I0217 18:57:15.666667 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6dc475ddf4-vbkll_a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf/barbican-api/0.log" Feb 17 18:57:15 crc kubenswrapper[4680]: I0217 18:57:15.736934 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6dc475ddf4-vbkll_a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf/barbican-api-log/0.log" Feb 17 18:57:15 crc kubenswrapper[4680]: I0217 18:57:15.885618 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-8474964c54-bvtdt_27ae5442-486e-406a-ba33-bfcbe792792f/barbican-keystone-listener-log/0.log" Feb 17 18:57:15 crc kubenswrapper[4680]: I0217 18:57:15.920390 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-8474964c54-bvtdt_27ae5442-486e-406a-ba33-bfcbe792792f/barbican-keystone-listener/0.log" Feb 17 18:57:16 crc kubenswrapper[4680]: 
I0217 18:57:16.036894 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6b7bf9567f-k8kfh_6ccb1c12-6633-49c0-879c-5eaa29017991/barbican-worker/0.log" Feb 17 18:57:16 crc kubenswrapper[4680]: I0217 18:57:16.130753 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6b7bf9567f-k8kfh_6ccb1c12-6633-49c0-879c-5eaa29017991/barbican-worker-log/0.log" Feb 17 18:57:16 crc kubenswrapper[4680]: I0217 18:57:16.363780 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd_756e6a23-4101-4886-bfe8-cc4f12799fd7/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 18:57:16 crc kubenswrapper[4680]: I0217 18:57:16.598573 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7c4b1bad-b08f-4697-a925-b3b943394bdb/ceilometer-central-agent/0.log" Feb 17 18:57:16 crc kubenswrapper[4680]: I0217 18:57:16.653246 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7c4b1bad-b08f-4697-a925-b3b943394bdb/ceilometer-notification-agent/0.log" Feb 17 18:57:16 crc kubenswrapper[4680]: I0217 18:57:16.684760 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7c4b1bad-b08f-4697-a925-b3b943394bdb/proxy-httpd/0.log" Feb 17 18:57:16 crc kubenswrapper[4680]: I0217 18:57:16.789254 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7c4b1bad-b08f-4697-a925-b3b943394bdb/sg-core/0.log" Feb 17 18:57:16 crc kubenswrapper[4680]: I0217 18:57:16.945388 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_878b21ac-092c-4e88-ba08-f59d0b220d79/cinder-api/0.log" Feb 17 18:57:16 crc kubenswrapper[4680]: I0217 18:57:16.979264 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_878b21ac-092c-4e88-ba08-f59d0b220d79/cinder-api-log/0.log" Feb 17 18:57:17 crc kubenswrapper[4680]: I0217 18:57:17.170274 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7171f050-7893-4d68-b65a-9425776b1301/cinder-scheduler/0.log" Feb 17 18:57:17 crc kubenswrapper[4680]: I0217 18:57:17.194384 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7171f050-7893-4d68-b65a-9425776b1301/probe/0.log" Feb 17 18:57:17 crc kubenswrapper[4680]: I0217 18:57:17.306638 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-pmfpf_05d9615f-35c4-4fcc-ac20-b798ba6abc72/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 18:57:17 crc kubenswrapper[4680]: I0217 18:57:17.438635 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-vr9b7_44aba16b-dafd-4ce2-8f13-01e7bb14a109/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 18:57:17 crc kubenswrapper[4680]: I0217 18:57:17.622037 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-hwrrl_44cbaec4-7f81-44c8-80f7-02818330e08f/init/0.log" Feb 17 18:57:17 crc kubenswrapper[4680]: I0217 18:57:17.782503 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-hwrrl_44cbaec4-7f81-44c8-80f7-02818330e08f/init/0.log" Feb 17 18:57:17 crc kubenswrapper[4680]: I0217 18:57:17.897100 4680 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-sstxg_fc6f1259-7540-444b-997b-4bfdddbc6977/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 18:57:17 crc kubenswrapper[4680]: I0217 18:57:17.966346 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-hwrrl_44cbaec4-7f81-44c8-80f7-02818330e08f/dnsmasq-dns/0.log" Feb 17 18:57:18 crc kubenswrapper[4680]: I0217 18:57:18.090980 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4d07a220-fd50-489d-9df4-ba9ed7ce2c95/glance-httpd/0.log" Feb 17 18:57:18 crc kubenswrapper[4680]: I0217 18:57:18.130642 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4d07a220-fd50-489d-9df4-ba9ed7ce2c95/glance-log/0.log" Feb 17 18:57:18 crc kubenswrapper[4680]: I0217 18:57:18.327766 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c/glance-log/0.log" Feb 17 18:57:18 crc kubenswrapper[4680]: I0217 18:57:18.356039 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c/glance-httpd/0.log" Feb 17 18:57:18 crc kubenswrapper[4680]: I0217 18:57:18.655595 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6978d4c7cb-4k7vb_91980b55-1251-4404-a22d-1572dd91ec8a/horizon/1.log" Feb 17 18:57:18 crc kubenswrapper[4680]: I0217 18:57:18.727930 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6978d4c7cb-4k7vb_91980b55-1251-4404-a22d-1572dd91ec8a/horizon/0.log" Feb 17 18:57:18 crc kubenswrapper[4680]: I0217 18:57:18.995985 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-flssf_3cb6431e-7411-4f54-9b55-6ba091b50804/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 18:57:19 crc kubenswrapper[4680]: I0217 18:57:19.116719 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6978d4c7cb-4k7vb_91980b55-1251-4404-a22d-1572dd91ec8a/horizon-log/0.log" Feb 17 18:57:19 crc kubenswrapper[4680]: I0217 18:57:19.197820 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-g99q6_88ff3876-8728-4981-a3c7-0ccef82dee3e/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 18:57:19 crc kubenswrapper[4680]: I0217 18:57:19.481422 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_45b4ecc2-2420-49af-9969-0799eda9ee0f/kube-state-metrics/0.log" Feb 17 18:57:19 crc kubenswrapper[4680]: I0217 18:57:19.893813 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-65z5s_d1713773-5461-439e-8d37-5c1f2b54db0a/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 18:57:19 crc kubenswrapper[4680]: I0217 18:57:19.929127 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-854d5d774d-kbx8n_7370e5a4-158b-44db-8b30-e9201681787a/keystone-api/0.log" Feb 17 18:57:20 crc kubenswrapper[4680]: I0217 18:57:20.105967 4680 scope.go:117] "RemoveContainer" containerID="5878893ca78c00fe85ae7205f751e33002b0f532b3587ba2302097298986a151" Feb 17 18:57:20 crc kubenswrapper[4680]: E0217 18:57:20.106379 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:57:20 crc kubenswrapper[4680]: I0217 18:57:20.654054 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z_6966330b-ec2c-4a0d-a3b2-50c582e0eb69/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 18:57:20 crc kubenswrapper[4680]: I0217 18:57:20.733335 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-59fd4ccc67-hf8zf_c24ed534-12dd-4bab-aac1-3ef792cd8e01/neutron-httpd/0.log" Feb 17 18:57:20 crc kubenswrapper[4680]: I0217 18:57:20.862600 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-59fd4ccc67-hf8zf_c24ed534-12dd-4bab-aac1-3ef792cd8e01/neutron-api/0.log" Feb 17 18:57:21 crc kubenswrapper[4680]: I0217 18:57:21.622658 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_ac239796-b255-4e18-a027-2bd872c8ccc9/nova-cell0-conductor-conductor/0.log" Feb 17 18:57:21 crc kubenswrapper[4680]: I0217 18:57:21.971496 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_2aab84dd-8ed8-46fd-931a-31759485bb02/nova-cell1-conductor-conductor/0.log" Feb 17 18:57:22 crc kubenswrapper[4680]: I0217 18:57:22.333968 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_6c1fc559-7f55-4c70-b448-921d56e4a6c3/nova-cell1-novncproxy-novncproxy/0.log" Feb 17 18:57:22 crc kubenswrapper[4680]: I0217 18:57:22.386550 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_884f4451-a521-4e0f-9fd8-5f4e9c094fb8/nova-api-log/0.log" Feb 17 18:57:22 crc kubenswrapper[4680]: I0217 18:57:22.539569 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_884f4451-a521-4e0f-9fd8-5f4e9c094fb8/nova-api-api/0.log" Feb 17 18:57:23 crc kubenswrapper[4680]: I0217 18:57:23.296847 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-plf45_111c14f3-6274-434d-a73d-09328a365bd2/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 18:57:23 crc kubenswrapper[4680]: I0217 18:57:23.421091 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e325136c-a30f-4f22-aecf-69319458b18d/nova-metadata-log/0.log" Feb 17 18:57:23 crc kubenswrapper[4680]: I0217 18:57:23.826247 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_408a63fd-0033-4bf6-925a-cc82a63b2a60/mysql-bootstrap/0.log" Feb 17 18:57:24 crc kubenswrapper[4680]: I0217 18:57:24.075827 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_13bf0e76-39eb-44b7-876e-76e06523f968/nova-scheduler-scheduler/0.log" Feb 17 18:57:24 crc kubenswrapper[4680]: I0217 18:57:24.138682 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_408a63fd-0033-4bf6-925a-cc82a63b2a60/mysql-bootstrap/0.log" Feb 17 18:57:24 crc kubenswrapper[4680]: I0217 18:57:24.240022 4680 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_408a63fd-0033-4bf6-925a-cc82a63b2a60/galera/0.log" Feb 17 18:57:24 crc kubenswrapper[4680]: I0217 18:57:24.698156 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_92bcbcf3-8caf-4493-99ac-478e3d55a1c1/mysql-bootstrap/0.log" Feb 17 18:57:25 crc kubenswrapper[4680]: I0217 18:57:25.014031 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_92bcbcf3-8caf-4493-99ac-478e3d55a1c1/mysql-bootstrap/0.log" Feb 17 18:57:25 crc kubenswrapper[4680]: I0217 18:57:25.027902 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_92bcbcf3-8caf-4493-99ac-478e3d55a1c1/galera/0.log" Feb 17 18:57:25 crc kubenswrapper[4680]: I0217 18:57:25.123757 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e325136c-a30f-4f22-aecf-69319458b18d/nova-metadata-metadata/0.log" Feb 17 18:57:25 crc kubenswrapper[4680]: I0217 18:57:25.287264 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_a0986f8d-da98-4f6e-bd4e-457e4f405022/openstackclient/0.log" Feb 17 18:57:25 crc kubenswrapper[4680]: I0217 18:57:25.354519 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-7dz9r_2ce593ef-6a97-4361-af72-7637711c07b5/ovn-controller/0.log" Feb 17 18:57:25 crc kubenswrapper[4680]: I0217 18:57:25.526275 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-tw5w5_42bb122a-c2c2-4a70-a745-1a6bb9334698/openstack-network-exporter/0.log" Feb 17 18:57:25 crc kubenswrapper[4680]: I0217 18:57:25.639301 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-grk2z_8f069d89-4464-4526-ba2c-5d4a9de13e02/ovsdb-server-init/0.log" Feb 17 18:57:26 crc kubenswrapper[4680]: I0217 18:57:26.289760 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-grk2z_8f069d89-4464-4526-ba2c-5d4a9de13e02/ovsdb-server-init/0.log" Feb 17 18:57:26 crc kubenswrapper[4680]: I0217 18:57:26.324961 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-grk2z_8f069d89-4464-4526-ba2c-5d4a9de13e02/ovsdb-server/0.log" Feb 17 18:57:26 crc kubenswrapper[4680]: I0217 18:57:26.363803 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-grk2z_8f069d89-4464-4526-ba2c-5d4a9de13e02/ovs-vswitchd/0.log" Feb 17 18:57:26 crc kubenswrapper[4680]: I0217 18:57:26.594050 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-shzfm_c501ee0c-b4b4-4517-a946-acea84eaf4e9/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 18:57:26 crc kubenswrapper[4680]: I0217 18:57:26.678610 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_ffab513d-4164-4043-a65b-b72c0b1a67a4/openstack-network-exporter/0.log" Feb 17 18:57:26 crc kubenswrapper[4680]: I0217 18:57:26.739753 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_ffab513d-4164-4043-a65b-b72c0b1a67a4/ovn-northd/0.log" Feb 17 18:57:26 crc kubenswrapper[4680]: I0217 18:57:26.936739 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6ba0d348-8721-4d4f-9904-994b1ae56423/ovsdbserver-nb/0.log" Feb 17 18:57:26 crc kubenswrapper[4680]: I0217 18:57:26.968849 4680 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-0_6ba0d348-8721-4d4f-9904-994b1ae56423/openstack-network-exporter/0.log" Feb 17 18:57:27 crc kubenswrapper[4680]: I0217 18:57:27.232541 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e083ffca-d900-4b1a-bc3a-eacae2a81902/openstack-network-exporter/0.log" Feb 17 18:57:27 crc kubenswrapper[4680]: I0217 18:57:27.340945 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e083ffca-d900-4b1a-bc3a-eacae2a81902/ovsdbserver-sb/0.log" Feb 17 18:57:27 crc kubenswrapper[4680]: I0217 18:57:27.568710 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6c7f4dffcd-6k6kn_48b0173d-34ae-465b-8119-b84227984ccd/placement-api/0.log" Feb 17 18:57:27 crc kubenswrapper[4680]: I0217 18:57:27.610517 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8ddff1fb-6281-40d0-82dd-7f395cd9d3dd/setup-container/0.log" Feb 17 18:57:27 crc kubenswrapper[4680]: I0217 18:57:27.695878 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6c7f4dffcd-6k6kn_48b0173d-34ae-465b-8119-b84227984ccd/placement-log/0.log" Feb 17 18:57:27 crc kubenswrapper[4680]: I0217 18:57:27.995636 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f5b94b27-be97-48aa-a421-49c29363f388/setup-container/0.log" Feb 17 18:57:28 crc kubenswrapper[4680]: I0217 18:57:28.001014 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8ddff1fb-6281-40d0-82dd-7f395cd9d3dd/rabbitmq/0.log" Feb 17 18:57:28 crc kubenswrapper[4680]: I0217 18:57:28.022913 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8ddff1fb-6281-40d0-82dd-7f395cd9d3dd/setup-container/0.log" Feb 17 18:57:28 crc kubenswrapper[4680]: I0217 18:57:28.346404 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f5b94b27-be97-48aa-a421-49c29363f388/setup-container/0.log" Feb 17 18:57:28 crc kubenswrapper[4680]: I0217 18:57:28.367832 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f5b94b27-be97-48aa-a421-49c29363f388/rabbitmq/0.log" Feb 17 18:57:28 crc kubenswrapper[4680]: I0217 18:57:28.448925 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-tcn7b_aaa35e71-4185-4074-8f0a-eb601168a788/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 18:57:28 crc kubenswrapper[4680]: I0217 18:57:28.719803 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-trq4p_4dbcac70-066d-4d71-8f2d-f8762e241236/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 18:57:28 crc kubenswrapper[4680]: I0217 18:57:28.748764 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8_b65551bb-7084-4453-8791-0a12d9e928e4/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 18:57:29 crc kubenswrapper[4680]: I0217 18:57:29.024398 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-5gb6z_ad93d3c0-6115-499e-b0db-6840bda12a02/ssh-known-hosts-edpm-deployment/0.log" Feb 17 18:57:29 crc kubenswrapper[4680]: I0217 18:57:29.085804 4680 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-h76n9_5731e4b0-b432-4c63-aa65-59eed4740731/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 18:57:29 crc kubenswrapper[4680]: I0217 18:57:29.312368 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5c85cf9547-qhbrg_fc8411d6-1f53-4a7f-8528-0b3719bc5a6f/proxy-server/0.log" Feb 17 18:57:29 crc kubenswrapper[4680]: I0217 18:57:29.567563 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5c85cf9547-qhbrg_fc8411d6-1f53-4a7f-8528-0b3719bc5a6f/proxy-httpd/0.log" Feb 17 18:57:29 crc kubenswrapper[4680]: I0217 18:57:29.596845 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-b9q8k_e281ffae-b948-40c0-b8a1-68805d081000/swift-ring-rebalance/0.log" Feb 17 18:57:29 crc kubenswrapper[4680]: I0217 18:57:29.961778 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54c53390-eb8a-4a06-9d50-7d0a4f10d3df/account-auditor/0.log" Feb 17 18:57:30 crc kubenswrapper[4680]: I0217 18:57:30.104465 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54c53390-eb8a-4a06-9d50-7d0a4f10d3df/account-reaper/0.log" Feb 17 18:57:30 crc kubenswrapper[4680]: I0217 18:57:30.212842 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54c53390-eb8a-4a06-9d50-7d0a4f10d3df/account-server/0.log" Feb 17 18:57:30 crc kubenswrapper[4680]: I0217 18:57:30.296515 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54c53390-eb8a-4a06-9d50-7d0a4f10d3df/account-replicator/0.log" Feb 17 18:57:30 crc kubenswrapper[4680]: I0217 18:57:30.358715 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54c53390-eb8a-4a06-9d50-7d0a4f10d3df/container-auditor/0.log" Feb 17 18:57:30 crc kubenswrapper[4680]: I0217 18:57:30.379127 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54c53390-eb8a-4a06-9d50-7d0a4f10d3df/container-replicator/0.log" Feb 17 18:57:30 crc kubenswrapper[4680]: I0217 18:57:30.520478 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54c53390-eb8a-4a06-9d50-7d0a4f10d3df/container-server/0.log" Feb 17 18:57:30 crc kubenswrapper[4680]: I0217 18:57:30.583869 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54c53390-eb8a-4a06-9d50-7d0a4f10d3df/container-updater/0.log" Feb 17 18:57:30 crc kubenswrapper[4680]: I0217 18:57:30.586693 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54c53390-eb8a-4a06-9d50-7d0a4f10d3df/object-auditor/0.log" Feb 17 18:57:30 crc kubenswrapper[4680]: I0217 18:57:30.689122 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54c53390-eb8a-4a06-9d50-7d0a4f10d3df/object-expirer/0.log" Feb 17 18:57:30 crc kubenswrapper[4680]: I0217 18:57:30.909216 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54c53390-eb8a-4a06-9d50-7d0a4f10d3df/object-replicator/0.log" Feb 17 18:57:30 crc kubenswrapper[4680]: I0217 18:57:30.912898 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54c53390-eb8a-4a06-9d50-7d0a4f10d3df/object-server/0.log" Feb 17 18:57:30 crc kubenswrapper[4680]: I0217 18:57:30.925740 4680 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_54c53390-eb8a-4a06-9d50-7d0a4f10d3df/object-updater/0.log" Feb 17 18:57:30 crc kubenswrapper[4680]: I0217 18:57:30.961705 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54c53390-eb8a-4a06-9d50-7d0a4f10d3df/rsync/0.log" Feb 17 18:57:31 crc kubenswrapper[4680]: I0217 18:57:31.501606 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54c53390-eb8a-4a06-9d50-7d0a4f10d3df/swift-recon-cron/0.log" Feb 17 18:57:31 crc kubenswrapper[4680]: I0217 18:57:31.774575 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-grdvz_030c3571-afbe-48b3-9a59-890b63527ab7/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 18:57:31 crc kubenswrapper[4680]: I0217 18:57:31.835796 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_327c58a4-13db-4bf2-b6e4-755c23c2a36f/tempest-tests-tempest-tests-runner/0.log" Feb 17 18:57:31 crc kubenswrapper[4680]: I0217 18:57:31.920133 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_8d217e59-e2e9-4cfa-a272-5cc6c543347f/test-operator-logs-container/0.log" Feb 17 18:57:32 crc kubenswrapper[4680]: I0217 18:57:32.123275 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-4g8lm_f8db14b6-26ac-4b47-a820-ff6f4ce152dd/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 18:57:34 crc kubenswrapper[4680]: I0217 18:57:34.105068 4680 scope.go:117] "RemoveContainer" containerID="5878893ca78c00fe85ae7205f751e33002b0f532b3587ba2302097298986a151" Feb 17 18:57:34 crc kubenswrapper[4680]: E0217 18:57:34.106441 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:57:39 crc kubenswrapper[4680]: I0217 18:57:39.426209 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_391eec5a-7bea-47ed-963a-8152e1db2357/memcached/0.log" Feb 17 18:57:45 crc kubenswrapper[4680]: I0217 18:57:45.107303 4680 scope.go:117] "RemoveContainer" containerID="5878893ca78c00fe85ae7205f751e33002b0f532b3587ba2302097298986a151" Feb 17 18:57:45 crc kubenswrapper[4680]: E0217 18:57:45.107932 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:58:00 crc kubenswrapper[4680]: I0217 18:58:00.105903 4680 scope.go:117] "RemoveContainer" containerID="5878893ca78c00fe85ae7205f751e33002b0f532b3587ba2302097298986a151" Feb 17 18:58:00 crc kubenswrapper[4680]: E0217 18:58:00.106534 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:58:03 crc kubenswrapper[4680]: I0217 18:58:03.474906 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr_ad62ce23-e261-4066-9465-29efa5b1eb6d/util/0.log" Feb 17 18:58:03 crc kubenswrapper[4680]: I0217 18:58:03.728421 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr_ad62ce23-e261-4066-9465-29efa5b1eb6d/pull/0.log" Feb 17 18:58:03 crc kubenswrapper[4680]: I0217 18:58:03.736917 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr_ad62ce23-e261-4066-9465-29efa5b1eb6d/pull/0.log" Feb 17 18:58:03 crc kubenswrapper[4680]: I0217 18:58:03.743871 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr_ad62ce23-e261-4066-9465-29efa5b1eb6d/util/0.log" Feb 17 18:58:03 crc kubenswrapper[4680]: I0217 18:58:03.889720 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr_ad62ce23-e261-4066-9465-29efa5b1eb6d/util/0.log" Feb 17 18:58:03 crc kubenswrapper[4680]: I0217 18:58:03.894698 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr_ad62ce23-e261-4066-9465-29efa5b1eb6d/pull/0.log" Feb 17 18:58:03 crc kubenswrapper[4680]: I0217 18:58:03.973392 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr_ad62ce23-e261-4066-9465-29efa5b1eb6d/extract/0.log" Feb 17 18:58:04 crc kubenswrapper[4680]: I0217 18:58:04.497180 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-64dzx_5e432327-20d1-4688-8dda-173b62e27110/manager/0.log" Feb 17 18:58:04 crc kubenswrapper[4680]: I0217 18:58:04.938408 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-zq66h_3a32bda7-58e5-494b-af42-d255934537ac/manager/0.log" Feb 17 18:58:05 crc kubenswrapper[4680]: I0217 18:58:05.200981 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-mqdqj_dd0b1acf-7e52-47f1-9fa3-c82be1b7ce8f/manager/0.log" Feb 17 18:58:05 crc kubenswrapper[4680]: I0217 18:58:05.556186 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-wpzsn_85f8c95f-eecb-465c-be1f-c311edcbe597/manager/0.log" Feb 17 18:58:06 crc kubenswrapper[4680]: I0217 18:58:06.338984 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-ff5c8777-ft44h_a61ed4b7-335a-429e-8c5a-17b2d322c78e/manager/0.log" Feb 17 18:58:06 crc kubenswrapper[4680]: I0217 18:58:06.623478 4680 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-nbwj4_7030ebc6-bc71-4388-97c3-14403919b886/manager/0.log" Feb 17 18:58:07 crc kubenswrapper[4680]: I0217 18:58:07.134099 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-xqwqr_09667f14-d7ef-4502-9880-b0f14e2247b8/manager/0.log" Feb 17 18:58:07 crc kubenswrapper[4680]: I0217 18:58:07.179614 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-fnnbk_25257448-9b91-4b64-9279-e3e51e752019/manager/0.log" Feb 17 18:58:07 crc kubenswrapper[4680]: I0217 18:58:07.594113 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-bxxqw_01a45563-cabc-4dbc-80cb-49b274ab96cd/manager/0.log" Feb 17 18:58:07 crc kubenswrapper[4680]: I0217 18:58:07.792720 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-th6kc_27bd44a3-f6bc-462d-92a4-53287bd71b8f/manager/0.log" Feb 17 18:58:08 crc kubenswrapper[4680]: I0217 18:58:08.340616 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-n45mv_44afd1e8-ecdc-4854-bafa-d2bdabe26c5b/manager/0.log" Feb 17 18:58:08 crc kubenswrapper[4680]: I0217 18:58:08.551586 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-jpnmh_73c67b77-ecaf-4925-aba5-855dc368e729/manager/0.log" Feb 17 18:58:08 crc kubenswrapper[4680]: I0217 18:58:08.913132 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv_3a909135-a1d3-4a69-875d-74f19de59cf1/manager/0.log" Feb 17 18:58:09 crc kubenswrapper[4680]: I0217 18:58:09.140824 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5498fb9db9-fg6q4_c343adc1-0596-4313-9d07-e7a140130045/operator/0.log" Feb 17 18:58:09 crc kubenswrapper[4680]: I0217 18:58:09.375944 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-cmvdn_33ad8a07-6bf1-4c03-a44f-5462b01d5a82/registry-server/0.log" Feb 17 18:58:09 crc kubenswrapper[4680]: I0217 18:58:09.716402 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-htc8b_36e854c8-4742-4138-8766-6614fa005bae/manager/0.log" Feb 17 18:58:10 crc kubenswrapper[4680]: I0217 18:58:10.091365 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-rw6wk_eeb4f8c1-4744-4e7f-a8a4-a2deb95f4b45/manager/0.log" Feb 17 18:58:10 crc kubenswrapper[4680]: I0217 18:58:10.706034 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-6k4kw_7e8462ff-c9e3-4cff-bd4b-7f94d8acb8b1/operator/0.log" Feb 17 18:58:10 crc kubenswrapper[4680]: I0217 18:58:10.972727 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-d5n5g_41d9cacc-1317-479d-99ff-8eb7e6f3d6b6/manager/0.log" Feb 17 18:58:11 crc kubenswrapper[4680]: I0217 18:58:11.315790 4680 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-6628n_41c69a9d-0d86-4088-9490-262b3eff9441/manager/0.log" Feb 17 18:58:11 crc kubenswrapper[4680]: I0217 18:58:11.326375 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-7pstg_5663f682-3b3a-4061-8d4a-377a103c0bd4/manager/0.log" Feb 17 18:58:11 crc kubenswrapper[4680]: I0217 18:58:11.344086 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-b96b9dfc9-r9bnq_1982baea-3b82-4376-ab68-6559095032d5/manager/0.log" Feb 17 18:58:11 crc kubenswrapper[4680]: I0217 18:58:11.485189 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-vgwfn_6f60616b-3009-4961-af69-11a3d3e42f4b/manager/0.log" Feb 17 18:58:11 crc kubenswrapper[4680]: I0217 18:58:11.622460 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-t8p47_827a72ee-4516-4fe2-866a-a87124fe7439/manager/0.log" Feb 17 18:58:13 crc kubenswrapper[4680]: I0217 18:58:13.105018 4680 scope.go:117] "RemoveContainer" containerID="5878893ca78c00fe85ae7205f751e33002b0f532b3587ba2302097298986a151" Feb 17 18:58:13 crc kubenswrapper[4680]: E0217 18:58:13.105529 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:58:14 crc kubenswrapper[4680]: I0217 18:58:14.063029 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-9qzbv_c4d30f2e-a54a-43ef-923f-103f4e0c54d5/manager/0.log" Feb 17 18:58:24 crc kubenswrapper[4680]: I0217 18:58:24.106241 4680 scope.go:117] "RemoveContainer" containerID="5878893ca78c00fe85ae7205f751e33002b0f532b3587ba2302097298986a151" Feb 17 18:58:24 crc kubenswrapper[4680]: E0217 18:58:24.106989 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:58:35 crc kubenswrapper[4680]: I0217 18:58:35.870258 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-c462h_67e5bb04-4920-4f51-a729-0af77b16490e/control-plane-machine-set-operator/0.log" Feb 17 18:58:36 crc kubenswrapper[4680]: I0217 18:58:36.150752 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vhvpq_3abf57a9-2bd6-4f7f-ba75-5fe928201df8/kube-rbac-proxy/0.log" Feb 17 18:58:36 crc kubenswrapper[4680]: I0217 18:58:36.175723 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vhvpq_3abf57a9-2bd6-4f7f-ba75-5fe928201df8/machine-api-operator/0.log" Feb 17 18:58:37 crc 
kubenswrapper[4680]: I0217 18:58:37.148370 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9pf6p"] Feb 17 18:58:37 crc kubenswrapper[4680]: E0217 18:58:37.148777 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f50257e-c809-48dd-ac1f-7ba9694d0e64" containerName="container-00" Feb 17 18:58:37 crc kubenswrapper[4680]: I0217 18:58:37.148788 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f50257e-c809-48dd-ac1f-7ba9694d0e64" containerName="container-00" Feb 17 18:58:37 crc kubenswrapper[4680]: I0217 18:58:37.148962 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f50257e-c809-48dd-ac1f-7ba9694d0e64" containerName="container-00" Feb 17 18:58:37 crc kubenswrapper[4680]: I0217 18:58:37.150231 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9pf6p" Feb 17 18:58:37 crc kubenswrapper[4680]: I0217 18:58:37.173603 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9pf6p"] Feb 17 18:58:37 crc kubenswrapper[4680]: I0217 18:58:37.307689 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4344c10a-b977-4519-baa6-27a6329da4de-catalog-content\") pod \"certified-operators-9pf6p\" (UID: \"4344c10a-b977-4519-baa6-27a6329da4de\") " pod="openshift-marketplace/certified-operators-9pf6p" Feb 17 18:58:37 crc kubenswrapper[4680]: I0217 18:58:37.307752 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4344c10a-b977-4519-baa6-27a6329da4de-utilities\") pod \"certified-operators-9pf6p\" (UID: \"4344c10a-b977-4519-baa6-27a6329da4de\") " pod="openshift-marketplace/certified-operators-9pf6p" Feb 17 18:58:37 crc kubenswrapper[4680]: I0217 18:58:37.307797 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzmx9\" (UniqueName: \"kubernetes.io/projected/4344c10a-b977-4519-baa6-27a6329da4de-kube-api-access-vzmx9\") pod \"certified-operators-9pf6p\" (UID: \"4344c10a-b977-4519-baa6-27a6329da4de\") " pod="openshift-marketplace/certified-operators-9pf6p" Feb 17 18:58:37 crc kubenswrapper[4680]: I0217 18:58:37.409505 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4344c10a-b977-4519-baa6-27a6329da4de-catalog-content\") pod \"certified-operators-9pf6p\" (UID: \"4344c10a-b977-4519-baa6-27a6329da4de\") " pod="openshift-marketplace/certified-operators-9pf6p" Feb 17 18:58:37 crc kubenswrapper[4680]: I0217 18:58:37.409590 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4344c10a-b977-4519-baa6-27a6329da4de-utilities\") pod \"certified-operators-9pf6p\" (UID: \"4344c10a-b977-4519-baa6-27a6329da4de\") " pod="openshift-marketplace/certified-operators-9pf6p" Feb 17 18:58:37 crc kubenswrapper[4680]: I0217 18:58:37.409654 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzmx9\" (UniqueName: \"kubernetes.io/projected/4344c10a-b977-4519-baa6-27a6329da4de-kube-api-access-vzmx9\") pod \"certified-operators-9pf6p\" (UID: \"4344c10a-b977-4519-baa6-27a6329da4de\") " pod="openshift-marketplace/certified-operators-9pf6p" Feb 17 
18:58:37 crc kubenswrapper[4680]: I0217 18:58:37.410074 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4344c10a-b977-4519-baa6-27a6329da4de-catalog-content\") pod \"certified-operators-9pf6p\" (UID: \"4344c10a-b977-4519-baa6-27a6329da4de\") " pod="openshift-marketplace/certified-operators-9pf6p" Feb 17 18:58:37 crc kubenswrapper[4680]: I0217 18:58:37.410232 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4344c10a-b977-4519-baa6-27a6329da4de-utilities\") pod \"certified-operators-9pf6p\" (UID: \"4344c10a-b977-4519-baa6-27a6329da4de\") " pod="openshift-marketplace/certified-operators-9pf6p" Feb 17 18:58:37 crc kubenswrapper[4680]: I0217 18:58:37.434644 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzmx9\" (UniqueName: \"kubernetes.io/projected/4344c10a-b977-4519-baa6-27a6329da4de-kube-api-access-vzmx9\") pod \"certified-operators-9pf6p\" (UID: \"4344c10a-b977-4519-baa6-27a6329da4de\") " pod="openshift-marketplace/certified-operators-9pf6p" Feb 17 18:58:37 crc kubenswrapper[4680]: I0217 18:58:37.466159 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9pf6p" Feb 17 18:58:38 crc kubenswrapper[4680]: I0217 18:58:38.266379 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9pf6p"] Feb 17 18:58:38 crc kubenswrapper[4680]: I0217 18:58:38.543227 4680 generic.go:334] "Generic (PLEG): container finished" podID="4344c10a-b977-4519-baa6-27a6329da4de" containerID="2ef0fec53281d7326223a0bf11e003d25b09e9a406f0ff8975df63a4391f138e" exitCode=0 Feb 17 18:58:38 crc kubenswrapper[4680]: I0217 18:58:38.543279 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pf6p" event={"ID":"4344c10a-b977-4519-baa6-27a6329da4de","Type":"ContainerDied","Data":"2ef0fec53281d7326223a0bf11e003d25b09e9a406f0ff8975df63a4391f138e"} Feb 17 18:58:38 crc kubenswrapper[4680]: I0217 18:58:38.543306 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pf6p" event={"ID":"4344c10a-b977-4519-baa6-27a6329da4de","Type":"ContainerStarted","Data":"e77fb3ce401d3b33e6b879eba64560081a9bc960d7f36498e2e71b314b17dce3"} Feb 17 18:58:38 crc kubenswrapper[4680]: I0217 18:58:38.544956 4680 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 18:58:39 crc kubenswrapper[4680]: I0217 18:58:39.106129 4680 scope.go:117] "RemoveContainer" containerID="5878893ca78c00fe85ae7205f751e33002b0f532b3587ba2302097298986a151" Feb 17 18:58:39 crc kubenswrapper[4680]: E0217 18:58:39.106746 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:58:39 crc kubenswrapper[4680]: I0217 18:58:39.560552 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xt4mt"] Feb 17 18:58:39 crc kubenswrapper[4680]: I0217 18:58:39.562671 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xt4mt" Feb 17 18:58:39 crc kubenswrapper[4680]: I0217 18:58:39.565629 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pf6p" event={"ID":"4344c10a-b977-4519-baa6-27a6329da4de","Type":"ContainerStarted","Data":"f36ad2aa72b0535f5dbd47de72df6ca01e3d37e3cf667932c909180d4a11fea0"} Feb 17 18:58:39 crc kubenswrapper[4680]: I0217 18:58:39.570311 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71028606-dbd8-4bc3-902a-2c56f89e2341-utilities\") pod \"redhat-marketplace-xt4mt\" (UID: \"71028606-dbd8-4bc3-902a-2c56f89e2341\") " pod="openshift-marketplace/redhat-marketplace-xt4mt" Feb 17 18:58:39 crc kubenswrapper[4680]: I0217 18:58:39.574922 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xt4mt"] Feb 17 18:58:39 crc kubenswrapper[4680]: I0217 18:58:39.582394 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71028606-dbd8-4bc3-902a-2c56f89e2341-catalog-content\") pod \"redhat-marketplace-xt4mt\" (UID: \"71028606-dbd8-4bc3-902a-2c56f89e2341\") " pod="openshift-marketplace/redhat-marketplace-xt4mt" Feb 17 18:58:39 crc kubenswrapper[4680]: I0217 18:58:39.582543 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klppm\" (UniqueName: \"kubernetes.io/projected/71028606-dbd8-4bc3-902a-2c56f89e2341-kube-api-access-klppm\") pod \"redhat-marketplace-xt4mt\" (UID: \"71028606-dbd8-4bc3-902a-2c56f89e2341\") " pod="openshift-marketplace/redhat-marketplace-xt4mt" Feb 17 18:58:39 crc kubenswrapper[4680]: I0217 18:58:39.684796 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71028606-dbd8-4bc3-902a-2c56f89e2341-utilities\") pod \"redhat-marketplace-xt4mt\" (UID: \"71028606-dbd8-4bc3-902a-2c56f89e2341\") " pod="openshift-marketplace/redhat-marketplace-xt4mt" Feb 17 18:58:39 crc kubenswrapper[4680]: I0217 18:58:39.684907 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71028606-dbd8-4bc3-902a-2c56f89e2341-catalog-content\") pod \"redhat-marketplace-xt4mt\" (UID: \"71028606-dbd8-4bc3-902a-2c56f89e2341\") " pod="openshift-marketplace/redhat-marketplace-xt4mt" Feb 17 18:58:39 crc kubenswrapper[4680]: I0217 18:58:39.684959 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klppm\" (UniqueName: \"kubernetes.io/projected/71028606-dbd8-4bc3-902a-2c56f89e2341-kube-api-access-klppm\") pod \"redhat-marketplace-xt4mt\" (UID: \"71028606-dbd8-4bc3-902a-2c56f89e2341\") " pod="openshift-marketplace/redhat-marketplace-xt4mt" Feb 17 18:58:39 crc kubenswrapper[4680]: I0217 18:58:39.685434 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71028606-dbd8-4bc3-902a-2c56f89e2341-utilities\") pod \"redhat-marketplace-xt4mt\" (UID: \"71028606-dbd8-4bc3-902a-2c56f89e2341\") " pod="openshift-marketplace/redhat-marketplace-xt4mt" Feb 17 18:58:39 crc kubenswrapper[4680]: I0217 18:58:39.685545 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/71028606-dbd8-4bc3-902a-2c56f89e2341-catalog-content\") pod \"redhat-marketplace-xt4mt\" (UID: \"71028606-dbd8-4bc3-902a-2c56f89e2341\") " pod="openshift-marketplace/redhat-marketplace-xt4mt" Feb 17 18:58:39 crc kubenswrapper[4680]: I0217 18:58:39.707137 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klppm\" (UniqueName: \"kubernetes.io/projected/71028606-dbd8-4bc3-902a-2c56f89e2341-kube-api-access-klppm\") pod \"redhat-marketplace-xt4mt\" (UID: \"71028606-dbd8-4bc3-902a-2c56f89e2341\") " pod="openshift-marketplace/redhat-marketplace-xt4mt" Feb 17 18:58:39 crc kubenswrapper[4680]: I0217 18:58:39.910042 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xt4mt" Feb 17 18:58:40 crc kubenswrapper[4680]: I0217 18:58:40.216474 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xt4mt"] Feb 17 18:58:40 crc kubenswrapper[4680]: I0217 18:58:40.576026 4680 generic.go:334] "Generic (PLEG): container finished" podID="71028606-dbd8-4bc3-902a-2c56f89e2341" containerID="5d08a9bdb2d31506186920294b2b5012e440df35db642ec1c8a4042776c75500" exitCode=0 Feb 17 18:58:40 crc kubenswrapper[4680]: I0217 18:58:40.576082 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xt4mt" event={"ID":"71028606-dbd8-4bc3-902a-2c56f89e2341","Type":"ContainerDied","Data":"5d08a9bdb2d31506186920294b2b5012e440df35db642ec1c8a4042776c75500"} Feb 17 18:58:40 crc kubenswrapper[4680]: I0217 18:58:40.577442 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xt4mt" event={"ID":"71028606-dbd8-4bc3-902a-2c56f89e2341","Type":"ContainerStarted","Data":"146170fa800c65a376f640ecb09f2e4a8cb273a891dc7e83bccc78b139e438df"} Feb 17 18:58:41 crc kubenswrapper[4680]: I0217 18:58:41.598431 4680 generic.go:334] "Generic (PLEG): container finished" podID="4344c10a-b977-4519-baa6-27a6329da4de" containerID="f36ad2aa72b0535f5dbd47de72df6ca01e3d37e3cf667932c909180d4a11fea0" exitCode=0 Feb 17 18:58:41 crc kubenswrapper[4680]: I0217 18:58:41.598629 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pf6p" event={"ID":"4344c10a-b977-4519-baa6-27a6329da4de","Type":"ContainerDied","Data":"f36ad2aa72b0535f5dbd47de72df6ca01e3d37e3cf667932c909180d4a11fea0"} Feb 17 18:58:42 crc kubenswrapper[4680]: I0217 18:58:42.648069 4680 generic.go:334] "Generic (PLEG): container finished" podID="71028606-dbd8-4bc3-902a-2c56f89e2341" containerID="7a6e0dcf906e99f91303aebc41f2a5070e63ac3f2f09c7cfd6fcdb2f03d60509" exitCode=0 Feb 17 18:58:42 crc kubenswrapper[4680]: I0217 18:58:42.648427 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xt4mt" event={"ID":"71028606-dbd8-4bc3-902a-2c56f89e2341","Type":"ContainerDied","Data":"7a6e0dcf906e99f91303aebc41f2a5070e63ac3f2f09c7cfd6fcdb2f03d60509"} Feb 17 18:58:42 crc kubenswrapper[4680]: I0217 18:58:42.659254 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pf6p" event={"ID":"4344c10a-b977-4519-baa6-27a6329da4de","Type":"ContainerStarted","Data":"b66b7a9098a9489d4a5ea71b61d016617f1e4c2e0b0d6fe3c3ff23e90f81a8a3"} Feb 17 18:58:42 crc kubenswrapper[4680]: I0217 18:58:42.708671 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9pf6p" 
podStartSLOduration=2.252767462 podStartE2EDuration="5.708651188s" podCreationTimestamp="2026-02-17 18:58:37 +0000 UTC" firstStartedPulling="2026-02-17 18:58:38.544735294 +0000 UTC m=+4448.095633973" lastFinishedPulling="2026-02-17 18:58:42.00061902 +0000 UTC m=+4451.551517699" observedRunningTime="2026-02-17 18:58:42.705781499 +0000 UTC m=+4452.256680188" watchObservedRunningTime="2026-02-17 18:58:42.708651188 +0000 UTC m=+4452.259549867" Feb 17 18:58:42 crc kubenswrapper[4680]: E0217 18:58:42.741511 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71028606_dbd8_4bc3_902a_2c56f89e2341.slice/crio-7a6e0dcf906e99f91303aebc41f2a5070e63ac3f2f09c7cfd6fcdb2f03d60509.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71028606_dbd8_4bc3_902a_2c56f89e2341.slice/crio-conmon-7a6e0dcf906e99f91303aebc41f2a5070e63ac3f2f09c7cfd6fcdb2f03d60509.scope\": RecentStats: unable to find data in memory cache]" Feb 17 18:58:43 crc kubenswrapper[4680]: I0217 18:58:43.672682 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xt4mt" event={"ID":"71028606-dbd8-4bc3-902a-2c56f89e2341","Type":"ContainerStarted","Data":"9570fcfe1550818d4bda257a00c2a25451405fd2ef5395bac2e034b2e9c270d3"} Feb 17 18:58:43 crc kubenswrapper[4680]: I0217 18:58:43.696910 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xt4mt" podStartSLOduration=2.180154848 podStartE2EDuration="4.696883777s" podCreationTimestamp="2026-02-17 18:58:39 +0000 UTC" firstStartedPulling="2026-02-17 18:58:40.577607199 +0000 UTC m=+4450.128505888" lastFinishedPulling="2026-02-17 18:58:43.094336148 +0000 UTC m=+4452.645234817" observedRunningTime="2026-02-17 18:58:43.689334845 +0000 UTC m=+4453.240233524" watchObservedRunningTime="2026-02-17 18:58:43.696883777 +0000 UTC m=+4453.247782456" Feb 17 18:58:47 crc kubenswrapper[4680]: I0217 18:58:47.467332 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9pf6p" Feb 17 18:58:47 crc kubenswrapper[4680]: I0217 18:58:47.467932 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9pf6p" Feb 17 18:58:47 crc kubenswrapper[4680]: I0217 18:58:47.511684 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9pf6p" Feb 17 18:58:48 crc kubenswrapper[4680]: I0217 18:58:48.149881 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9pf6p" Feb 17 18:58:48 crc kubenswrapper[4680]: I0217 18:58:48.734760 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9pf6p"] Feb 17 18:58:49 crc kubenswrapper[4680]: I0217 18:58:49.719134 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9pf6p" podUID="4344c10a-b977-4519-baa6-27a6329da4de" containerName="registry-server" containerID="cri-o://b66b7a9098a9489d4a5ea71b61d016617f1e4c2e0b0d6fe3c3ff23e90f81a8a3" gracePeriod=2 Feb 17 18:58:49 crc kubenswrapper[4680]: I0217 18:58:49.910514 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-xt4mt" Feb 17 18:58:49 crc kubenswrapper[4680]: I0217 18:58:49.910821 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xt4mt" Feb 17 18:58:49 crc kubenswrapper[4680]: I0217 18:58:49.966136 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xt4mt" Feb 17 18:58:50 crc kubenswrapper[4680]: I0217 18:58:50.180211 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9pf6p" Feb 17 18:58:50 crc kubenswrapper[4680]: I0217 18:58:50.207128 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzmx9\" (UniqueName: \"kubernetes.io/projected/4344c10a-b977-4519-baa6-27a6329da4de-kube-api-access-vzmx9\") pod \"4344c10a-b977-4519-baa6-27a6329da4de\" (UID: \"4344c10a-b977-4519-baa6-27a6329da4de\") " Feb 17 18:58:50 crc kubenswrapper[4680]: I0217 18:58:50.207375 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4344c10a-b977-4519-baa6-27a6329da4de-utilities\") pod \"4344c10a-b977-4519-baa6-27a6329da4de\" (UID: \"4344c10a-b977-4519-baa6-27a6329da4de\") " Feb 17 18:58:50 crc kubenswrapper[4680]: I0217 18:58:50.207404 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4344c10a-b977-4519-baa6-27a6329da4de-catalog-content\") pod \"4344c10a-b977-4519-baa6-27a6329da4de\" (UID: \"4344c10a-b977-4519-baa6-27a6329da4de\") " Feb 17 18:58:50 crc kubenswrapper[4680]: I0217 18:58:50.212178 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4344c10a-b977-4519-baa6-27a6329da4de-utilities" (OuterVolumeSpecName: "utilities") pod "4344c10a-b977-4519-baa6-27a6329da4de" (UID: "4344c10a-b977-4519-baa6-27a6329da4de"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:58:50 crc kubenswrapper[4680]: I0217 18:58:50.220483 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4344c10a-b977-4519-baa6-27a6329da4de-kube-api-access-vzmx9" (OuterVolumeSpecName: "kube-api-access-vzmx9") pod "4344c10a-b977-4519-baa6-27a6329da4de" (UID: "4344c10a-b977-4519-baa6-27a6329da4de"). InnerVolumeSpecName "kube-api-access-vzmx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:58:50 crc kubenswrapper[4680]: I0217 18:58:50.269816 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4344c10a-b977-4519-baa6-27a6329da4de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4344c10a-b977-4519-baa6-27a6329da4de" (UID: "4344c10a-b977-4519-baa6-27a6329da4de"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:58:50 crc kubenswrapper[4680]: I0217 18:58:50.309098 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4344c10a-b977-4519-baa6-27a6329da4de-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 18:58:50 crc kubenswrapper[4680]: I0217 18:58:50.309126 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4344c10a-b977-4519-baa6-27a6329da4de-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 18:58:50 crc kubenswrapper[4680]: I0217 18:58:50.309139 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzmx9\" (UniqueName: \"kubernetes.io/projected/4344c10a-b977-4519-baa6-27a6329da4de-kube-api-access-vzmx9\") on node \"crc\" DevicePath \"\"" Feb 17 18:58:50 crc kubenswrapper[4680]: I0217 18:58:50.729442 4680 generic.go:334] "Generic (PLEG): container finished" podID="4344c10a-b977-4519-baa6-27a6329da4de" containerID="b66b7a9098a9489d4a5ea71b61d016617f1e4c2e0b0d6fe3c3ff23e90f81a8a3" exitCode=0 Feb 17 18:58:50 crc kubenswrapper[4680]: I0217 18:58:50.729508 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9pf6p" Feb 17 18:58:50 crc kubenswrapper[4680]: I0217 18:58:50.729558 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pf6p" event={"ID":"4344c10a-b977-4519-baa6-27a6329da4de","Type":"ContainerDied","Data":"b66b7a9098a9489d4a5ea71b61d016617f1e4c2e0b0d6fe3c3ff23e90f81a8a3"} Feb 17 18:58:50 crc kubenswrapper[4680]: I0217 18:58:50.729590 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pf6p" event={"ID":"4344c10a-b977-4519-baa6-27a6329da4de","Type":"ContainerDied","Data":"e77fb3ce401d3b33e6b879eba64560081a9bc960d7f36498e2e71b314b17dce3"} Feb 17 18:58:50 crc kubenswrapper[4680]: I0217 18:58:50.729611 4680 scope.go:117] "RemoveContainer" containerID="b66b7a9098a9489d4a5ea71b61d016617f1e4c2e0b0d6fe3c3ff23e90f81a8a3" Feb 17 18:58:50 crc kubenswrapper[4680]: I0217 18:58:50.753082 4680 scope.go:117] "RemoveContainer" containerID="f36ad2aa72b0535f5dbd47de72df6ca01e3d37e3cf667932c909180d4a11fea0" Feb 17 18:58:50 crc kubenswrapper[4680]: I0217 18:58:50.790977 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9pf6p"] Feb 17 18:58:50 crc kubenswrapper[4680]: I0217 18:58:50.801463 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9pf6p"] Feb 17 18:58:50 crc kubenswrapper[4680]: I0217 18:58:50.804702 4680 scope.go:117] "RemoveContainer" containerID="2ef0fec53281d7326223a0bf11e003d25b09e9a406f0ff8975df63a4391f138e" Feb 17 18:58:50 crc kubenswrapper[4680]: I0217 18:58:50.806669 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xt4mt" Feb 17 18:58:50 crc kubenswrapper[4680]: I0217 18:58:50.832936 4680 scope.go:117] "RemoveContainer" containerID="b66b7a9098a9489d4a5ea71b61d016617f1e4c2e0b0d6fe3c3ff23e90f81a8a3" Feb 17 18:58:50 crc kubenswrapper[4680]: E0217 18:58:50.833982 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b66b7a9098a9489d4a5ea71b61d016617f1e4c2e0b0d6fe3c3ff23e90f81a8a3\": container with ID starting with 
b66b7a9098a9489d4a5ea71b61d016617f1e4c2e0b0d6fe3c3ff23e90f81a8a3 not found: ID does not exist" containerID="b66b7a9098a9489d4a5ea71b61d016617f1e4c2e0b0d6fe3c3ff23e90f81a8a3" Feb 17 18:58:50 crc kubenswrapper[4680]: I0217 18:58:50.834024 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b66b7a9098a9489d4a5ea71b61d016617f1e4c2e0b0d6fe3c3ff23e90f81a8a3"} err="failed to get container status \"b66b7a9098a9489d4a5ea71b61d016617f1e4c2e0b0d6fe3c3ff23e90f81a8a3\": rpc error: code = NotFound desc = could not find container \"b66b7a9098a9489d4a5ea71b61d016617f1e4c2e0b0d6fe3c3ff23e90f81a8a3\": container with ID starting with b66b7a9098a9489d4a5ea71b61d016617f1e4c2e0b0d6fe3c3ff23e90f81a8a3 not found: ID does not exist" Feb 17 18:58:50 crc kubenswrapper[4680]: I0217 18:58:50.834048 4680 scope.go:117] "RemoveContainer" containerID="f36ad2aa72b0535f5dbd47de72df6ca01e3d37e3cf667932c909180d4a11fea0" Feb 17 18:58:50 crc kubenswrapper[4680]: E0217 18:58:50.834500 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f36ad2aa72b0535f5dbd47de72df6ca01e3d37e3cf667932c909180d4a11fea0\": container with ID starting with f36ad2aa72b0535f5dbd47de72df6ca01e3d37e3cf667932c909180d4a11fea0 not found: ID does not exist" containerID="f36ad2aa72b0535f5dbd47de72df6ca01e3d37e3cf667932c909180d4a11fea0" Feb 17 18:58:50 crc kubenswrapper[4680]: I0217 18:58:50.834523 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f36ad2aa72b0535f5dbd47de72df6ca01e3d37e3cf667932c909180d4a11fea0"} err="failed to get container status \"f36ad2aa72b0535f5dbd47de72df6ca01e3d37e3cf667932c909180d4a11fea0\": rpc error: code = NotFound desc = could not find container \"f36ad2aa72b0535f5dbd47de72df6ca01e3d37e3cf667932c909180d4a11fea0\": container with ID starting with f36ad2aa72b0535f5dbd47de72df6ca01e3d37e3cf667932c909180d4a11fea0 not found: ID does not exist" Feb 17 18:58:50 crc kubenswrapper[4680]: I0217 18:58:50.834539 4680 scope.go:117] "RemoveContainer" containerID="2ef0fec53281d7326223a0bf11e003d25b09e9a406f0ff8975df63a4391f138e" Feb 17 18:58:50 crc kubenswrapper[4680]: E0217 18:58:50.834761 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ef0fec53281d7326223a0bf11e003d25b09e9a406f0ff8975df63a4391f138e\": container with ID starting with 2ef0fec53281d7326223a0bf11e003d25b09e9a406f0ff8975df63a4391f138e not found: ID does not exist" containerID="2ef0fec53281d7326223a0bf11e003d25b09e9a406f0ff8975df63a4391f138e" Feb 17 18:58:50 crc kubenswrapper[4680]: I0217 18:58:50.834782 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ef0fec53281d7326223a0bf11e003d25b09e9a406f0ff8975df63a4391f138e"} err="failed to get container status \"2ef0fec53281d7326223a0bf11e003d25b09e9a406f0ff8975df63a4391f138e\": rpc error: code = NotFound desc = could not find container \"2ef0fec53281d7326223a0bf11e003d25b09e9a406f0ff8975df63a4391f138e\": container with ID starting with 2ef0fec53281d7326223a0bf11e003d25b09e9a406f0ff8975df63a4391f138e not found: ID does not exist" Feb 17 18:58:51 crc kubenswrapper[4680]: I0217 18:58:51.119174 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4344c10a-b977-4519-baa6-27a6329da4de" path="/var/lib/kubelet/pods/4344c10a-b977-4519-baa6-27a6329da4de/volumes" Feb 17 18:58:51 crc kubenswrapper[4680]: I0217 18:58:51.119226 
4680 scope.go:117] "RemoveContainer" containerID="5878893ca78c00fe85ae7205f751e33002b0f532b3587ba2302097298986a151" Feb 17 18:58:51 crc kubenswrapper[4680]: E0217 18:58:51.119612 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:58:51 crc kubenswrapper[4680]: I0217 18:58:51.937191 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xt4mt"] Feb 17 18:58:52 crc kubenswrapper[4680]: I0217 18:58:52.026919 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-6mmb4_07caac96-9977-4dd7-9220-4ba6d1934a4c/cert-manager-controller/0.log" Feb 17 18:58:52 crc kubenswrapper[4680]: I0217 18:58:52.299218 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-r5b7s_0b2cbbee-856e-43b8-afa2-e3cbd86cf3cc/cert-manager-cainjector/0.log" Feb 17 18:58:52 crc kubenswrapper[4680]: I0217 18:58:52.340068 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-h4nrt_98cd155c-93dd-44c8-b05b-b3375c0eb430/cert-manager-webhook/0.log" Feb 17 18:58:52 crc kubenswrapper[4680]: I0217 18:58:52.746968 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xt4mt" podUID="71028606-dbd8-4bc3-902a-2c56f89e2341" containerName="registry-server" containerID="cri-o://9570fcfe1550818d4bda257a00c2a25451405fd2ef5395bac2e034b2e9c270d3" gracePeriod=2 Feb 17 18:58:52 crc kubenswrapper[4680]: E0217 18:58:52.972210 4680 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71028606_dbd8_4bc3_902a_2c56f89e2341.slice/crio-conmon-9570fcfe1550818d4bda257a00c2a25451405fd2ef5395bac2e034b2e9c270d3.scope\": RecentStats: unable to find data in memory cache]" Feb 17 18:58:53 crc kubenswrapper[4680]: I0217 18:58:53.251723 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xt4mt" Feb 17 18:58:53 crc kubenswrapper[4680]: I0217 18:58:53.281002 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klppm\" (UniqueName: \"kubernetes.io/projected/71028606-dbd8-4bc3-902a-2c56f89e2341-kube-api-access-klppm\") pod \"71028606-dbd8-4bc3-902a-2c56f89e2341\" (UID: \"71028606-dbd8-4bc3-902a-2c56f89e2341\") " Feb 17 18:58:53 crc kubenswrapper[4680]: I0217 18:58:53.281166 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71028606-dbd8-4bc3-902a-2c56f89e2341-catalog-content\") pod \"71028606-dbd8-4bc3-902a-2c56f89e2341\" (UID: \"71028606-dbd8-4bc3-902a-2c56f89e2341\") " Feb 17 18:58:53 crc kubenswrapper[4680]: I0217 18:58:53.281385 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71028606-dbd8-4bc3-902a-2c56f89e2341-utilities\") pod \"71028606-dbd8-4bc3-902a-2c56f89e2341\" (UID: \"71028606-dbd8-4bc3-902a-2c56f89e2341\") " Feb 17 18:58:53 crc kubenswrapper[4680]: I0217 18:58:53.282255 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71028606-dbd8-4bc3-902a-2c56f89e2341-utilities" (OuterVolumeSpecName: "utilities") pod "71028606-dbd8-4bc3-902a-2c56f89e2341" (UID: "71028606-dbd8-4bc3-902a-2c56f89e2341"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:58:53 crc kubenswrapper[4680]: I0217 18:58:53.286491 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71028606-dbd8-4bc3-902a-2c56f89e2341-kube-api-access-klppm" (OuterVolumeSpecName: "kube-api-access-klppm") pod "71028606-dbd8-4bc3-902a-2c56f89e2341" (UID: "71028606-dbd8-4bc3-902a-2c56f89e2341"). InnerVolumeSpecName "kube-api-access-klppm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 18:58:53 crc kubenswrapper[4680]: I0217 18:58:53.334511 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71028606-dbd8-4bc3-902a-2c56f89e2341-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71028606-dbd8-4bc3-902a-2c56f89e2341" (UID: "71028606-dbd8-4bc3-902a-2c56f89e2341"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 18:58:53 crc kubenswrapper[4680]: I0217 18:58:53.382928 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71028606-dbd8-4bc3-902a-2c56f89e2341-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 18:58:53 crc kubenswrapper[4680]: I0217 18:58:53.382968 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klppm\" (UniqueName: \"kubernetes.io/projected/71028606-dbd8-4bc3-902a-2c56f89e2341-kube-api-access-klppm\") on node \"crc\" DevicePath \"\"" Feb 17 18:58:53 crc kubenswrapper[4680]: I0217 18:58:53.382981 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71028606-dbd8-4bc3-902a-2c56f89e2341-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 18:58:53 crc kubenswrapper[4680]: I0217 18:58:53.758451 4680 generic.go:334] "Generic (PLEG): container finished" podID="71028606-dbd8-4bc3-902a-2c56f89e2341" containerID="9570fcfe1550818d4bda257a00c2a25451405fd2ef5395bac2e034b2e9c270d3" exitCode=0 Feb 17 18:58:53 crc kubenswrapper[4680]: I0217 18:58:53.758508 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xt4mt" event={"ID":"71028606-dbd8-4bc3-902a-2c56f89e2341","Type":"ContainerDied","Data":"9570fcfe1550818d4bda257a00c2a25451405fd2ef5395bac2e034b2e9c270d3"} Feb 17 18:58:53 crc kubenswrapper[4680]: I0217 18:58:53.758527 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xt4mt" Feb 17 18:58:53 crc kubenswrapper[4680]: I0217 18:58:53.758541 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xt4mt" event={"ID":"71028606-dbd8-4bc3-902a-2c56f89e2341","Type":"ContainerDied","Data":"146170fa800c65a376f640ecb09f2e4a8cb273a891dc7e83bccc78b139e438df"} Feb 17 18:58:53 crc kubenswrapper[4680]: I0217 18:58:53.758567 4680 scope.go:117] "RemoveContainer" containerID="9570fcfe1550818d4bda257a00c2a25451405fd2ef5395bac2e034b2e9c270d3" Feb 17 18:58:53 crc kubenswrapper[4680]: I0217 18:58:53.777634 4680 scope.go:117] "RemoveContainer" containerID="7a6e0dcf906e99f91303aebc41f2a5070e63ac3f2f09c7cfd6fcdb2f03d60509" Feb 17 18:58:53 crc kubenswrapper[4680]: I0217 18:58:53.808297 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xt4mt"] Feb 17 18:58:53 crc kubenswrapper[4680]: I0217 18:58:53.815273 4680 scope.go:117] "RemoveContainer" containerID="5d08a9bdb2d31506186920294b2b5012e440df35db642ec1c8a4042776c75500" Feb 17 18:58:53 crc kubenswrapper[4680]: I0217 18:58:53.816451 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xt4mt"] Feb 17 18:58:53 crc kubenswrapper[4680]: I0217 18:58:53.854184 4680 scope.go:117] "RemoveContainer" containerID="9570fcfe1550818d4bda257a00c2a25451405fd2ef5395bac2e034b2e9c270d3" Feb 17 18:58:53 crc kubenswrapper[4680]: E0217 18:58:53.854640 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9570fcfe1550818d4bda257a00c2a25451405fd2ef5395bac2e034b2e9c270d3\": container with ID starting with 9570fcfe1550818d4bda257a00c2a25451405fd2ef5395bac2e034b2e9c270d3 not found: ID does not exist" containerID="9570fcfe1550818d4bda257a00c2a25451405fd2ef5395bac2e034b2e9c270d3" Feb 17 18:58:53 crc kubenswrapper[4680]: I0217 18:58:53.854676 4680 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9570fcfe1550818d4bda257a00c2a25451405fd2ef5395bac2e034b2e9c270d3"} err="failed to get container status \"9570fcfe1550818d4bda257a00c2a25451405fd2ef5395bac2e034b2e9c270d3\": rpc error: code = NotFound desc = could not find container \"9570fcfe1550818d4bda257a00c2a25451405fd2ef5395bac2e034b2e9c270d3\": container with ID starting with 9570fcfe1550818d4bda257a00c2a25451405fd2ef5395bac2e034b2e9c270d3 not found: ID does not exist" Feb 17 18:58:53 crc kubenswrapper[4680]: I0217 18:58:53.854697 4680 scope.go:117] "RemoveContainer" containerID="7a6e0dcf906e99f91303aebc41f2a5070e63ac3f2f09c7cfd6fcdb2f03d60509" Feb 17 18:58:53 crc kubenswrapper[4680]: E0217 18:58:53.854991 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a6e0dcf906e99f91303aebc41f2a5070e63ac3f2f09c7cfd6fcdb2f03d60509\": container with ID starting with 7a6e0dcf906e99f91303aebc41f2a5070e63ac3f2f09c7cfd6fcdb2f03d60509 not found: ID does not exist" containerID="7a6e0dcf906e99f91303aebc41f2a5070e63ac3f2f09c7cfd6fcdb2f03d60509" Feb 17 18:58:53 crc kubenswrapper[4680]: I0217 18:58:53.855013 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a6e0dcf906e99f91303aebc41f2a5070e63ac3f2f09c7cfd6fcdb2f03d60509"} err="failed to get container status \"7a6e0dcf906e99f91303aebc41f2a5070e63ac3f2f09c7cfd6fcdb2f03d60509\": rpc error: code = NotFound desc = could not find container \"7a6e0dcf906e99f91303aebc41f2a5070e63ac3f2f09c7cfd6fcdb2f03d60509\": container with ID starting with 7a6e0dcf906e99f91303aebc41f2a5070e63ac3f2f09c7cfd6fcdb2f03d60509 not found: ID does not exist" Feb 17 18:58:53 crc kubenswrapper[4680]: I0217 18:58:53.855025 4680 scope.go:117] "RemoveContainer" containerID="5d08a9bdb2d31506186920294b2b5012e440df35db642ec1c8a4042776c75500" Feb 17 18:58:53 crc kubenswrapper[4680]: E0217 18:58:53.855350 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d08a9bdb2d31506186920294b2b5012e440df35db642ec1c8a4042776c75500\": container with ID starting with 5d08a9bdb2d31506186920294b2b5012e440df35db642ec1c8a4042776c75500 not found: ID does not exist" containerID="5d08a9bdb2d31506186920294b2b5012e440df35db642ec1c8a4042776c75500" Feb 17 18:58:53 crc kubenswrapper[4680]: I0217 18:58:53.855369 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d08a9bdb2d31506186920294b2b5012e440df35db642ec1c8a4042776c75500"} err="failed to get container status \"5d08a9bdb2d31506186920294b2b5012e440df35db642ec1c8a4042776c75500\": rpc error: code = NotFound desc = could not find container \"5d08a9bdb2d31506186920294b2b5012e440df35db642ec1c8a4042776c75500\": container with ID starting with 5d08a9bdb2d31506186920294b2b5012e440df35db642ec1c8a4042776c75500 not found: ID does not exist" Feb 17 18:58:55 crc kubenswrapper[4680]: I0217 18:58:55.115463 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71028606-dbd8-4bc3-902a-2c56f89e2341" path="/var/lib/kubelet/pods/71028606-dbd8-4bc3-902a-2c56f89e2341/volumes" Feb 17 18:59:04 crc kubenswrapper[4680]: I0217 18:59:04.443702 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-bqn9b_d11f19f3-34a3-42d7-b206-4c4377f903c1/nmstate-console-plugin/0.log" Feb 17 18:59:04 crc kubenswrapper[4680]: I0217 
18:59:04.487810 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-nh5lq_853da1fb-12a1-4f09-969f-9e8e58ebafc1/nmstate-handler/0.log" Feb 17 18:59:04 crc kubenswrapper[4680]: I0217 18:59:04.694769 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-88gd2_2e5d7226-10e8-4b3f-b5ef-3d976c1f8462/kube-rbac-proxy/0.log" Feb 17 18:59:04 crc kubenswrapper[4680]: I0217 18:59:04.695133 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-88gd2_2e5d7226-10e8-4b3f-b5ef-3d976c1f8462/nmstate-metrics/0.log" Feb 17 18:59:04 crc kubenswrapper[4680]: I0217 18:59:04.853182 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-nhzd8_5b484a76-ef98-43ef-9c15-0d2f86ce23a6/nmstate-operator/0.log" Feb 17 18:59:04 crc kubenswrapper[4680]: I0217 18:59:04.952645 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-hffc9_35eb3c36-116b-4a21-9d5f-0a4c28011eb4/nmstate-webhook/0.log" Feb 17 18:59:06 crc kubenswrapper[4680]: I0217 18:59:06.106075 4680 scope.go:117] "RemoveContainer" containerID="5878893ca78c00fe85ae7205f751e33002b0f532b3587ba2302097298986a151" Feb 17 18:59:06 crc kubenswrapper[4680]: E0217 18:59:06.106595 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:59:20 crc kubenswrapper[4680]: I0217 18:59:20.105291 4680 scope.go:117] "RemoveContainer" containerID="5878893ca78c00fe85ae7205f751e33002b0f532b3587ba2302097298986a151" Feb 17 18:59:20 crc kubenswrapper[4680]: E0217 18:59:20.106127 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:59:33 crc kubenswrapper[4680]: I0217 18:59:33.104849 4680 scope.go:117] "RemoveContainer" containerID="5878893ca78c00fe85ae7205f751e33002b0f532b3587ba2302097298986a151" Feb 17 18:59:33 crc kubenswrapper[4680]: E0217 18:59:33.105562 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:59:33 crc kubenswrapper[4680]: I0217 18:59:33.310504 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-fnhv4_c0f03e41-ea1c-4120-89b6-13445eeb9fd6/kube-rbac-proxy/0.log" Feb 17 18:59:33 crc kubenswrapper[4680]: I0217 18:59:33.448139 4680 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-69bbfbf88f-fnhv4_c0f03e41-ea1c-4120-89b6-13445eeb9fd6/controller/0.log" Feb 17 18:59:33 crc kubenswrapper[4680]: I0217 18:59:33.570797 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/cp-frr-files/0.log" Feb 17 18:59:33 crc kubenswrapper[4680]: I0217 18:59:33.771553 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/cp-reloader/0.log" Feb 17 18:59:33 crc kubenswrapper[4680]: I0217 18:59:33.785513 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/cp-reloader/0.log" Feb 17 18:59:33 crc kubenswrapper[4680]: I0217 18:59:33.808139 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/cp-frr-files/0.log" Feb 17 18:59:33 crc kubenswrapper[4680]: I0217 18:59:33.871551 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/cp-metrics/0.log" Feb 17 18:59:34 crc kubenswrapper[4680]: I0217 18:59:34.093355 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/cp-frr-files/0.log" Feb 17 18:59:34 crc kubenswrapper[4680]: I0217 18:59:34.120685 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/cp-metrics/0.log" Feb 17 18:59:34 crc kubenswrapper[4680]: I0217 18:59:34.138703 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/cp-reloader/0.log" Feb 17 18:59:34 crc kubenswrapper[4680]: I0217 18:59:34.156739 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/cp-metrics/0.log" Feb 17 18:59:34 crc kubenswrapper[4680]: I0217 18:59:34.316818 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/cp-frr-files/0.log" Feb 17 18:59:34 crc kubenswrapper[4680]: I0217 18:59:34.355738 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/controller/0.log" Feb 17 18:59:34 crc kubenswrapper[4680]: I0217 18:59:34.376031 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/cp-reloader/0.log" Feb 17 18:59:34 crc kubenswrapper[4680]: I0217 18:59:34.389149 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/cp-metrics/0.log" Feb 17 18:59:34 crc kubenswrapper[4680]: I0217 18:59:34.670064 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/kube-rbac-proxy-frr/0.log" Feb 17 18:59:34 crc kubenswrapper[4680]: I0217 18:59:34.677760 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/frr-metrics/0.log" Feb 17 18:59:34 crc kubenswrapper[4680]: I0217 18:59:34.776448 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/kube-rbac-proxy/0.log" Feb 17 18:59:35 crc 
kubenswrapper[4680]: I0217 18:59:35.166868 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/reloader/0.log" Feb 17 18:59:35 crc kubenswrapper[4680]: I0217 18:59:35.747001 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-w72g5_65241e32-d4f0-4771-b88c-97d3750130e9/frr-k8s-webhook-server/0.log" Feb 17 18:59:35 crc kubenswrapper[4680]: I0217 18:59:35.854781 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-847c44f7f8-ct2vk_bcb1b2bb-a7d3-41f4-a898-c563fceb659f/manager/0.log" Feb 17 18:59:36 crc kubenswrapper[4680]: I0217 18:59:36.119555 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/frr/0.log" Feb 17 18:59:36 crc kubenswrapper[4680]: I0217 18:59:36.189015 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-c4d794488-wgkbr_07691e4d-6667-4947-a3e3-de4d4d403aa5/webhook-server/0.log" Feb 17 18:59:36 crc kubenswrapper[4680]: I0217 18:59:36.352927 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wjjw2_4bbcd402-b03a-45bc-b343-95f81671cb35/kube-rbac-proxy/0.log" Feb 17 18:59:36 crc kubenswrapper[4680]: I0217 18:59:36.649832 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wjjw2_4bbcd402-b03a-45bc-b343-95f81671cb35/speaker/0.log" Feb 17 18:59:44 crc kubenswrapper[4680]: I0217 18:59:44.105117 4680 scope.go:117] "RemoveContainer" containerID="5878893ca78c00fe85ae7205f751e33002b0f532b3587ba2302097298986a151" Feb 17 18:59:44 crc kubenswrapper[4680]: E0217 18:59:44.105956 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:59:51 crc kubenswrapper[4680]: I0217 18:59:51.456149 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk_9c7369ac-8367-419f-b50f-82d0ba1cef96/util/0.log" Feb 17 18:59:51 crc kubenswrapper[4680]: I0217 18:59:51.612814 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk_9c7369ac-8367-419f-b50f-82d0ba1cef96/pull/0.log" Feb 17 18:59:51 crc kubenswrapper[4680]: I0217 18:59:51.678478 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk_9c7369ac-8367-419f-b50f-82d0ba1cef96/pull/0.log" Feb 17 18:59:51 crc kubenswrapper[4680]: I0217 18:59:51.699758 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk_9c7369ac-8367-419f-b50f-82d0ba1cef96/util/0.log" Feb 17 18:59:51 crc kubenswrapper[4680]: I0217 18:59:51.867476 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk_9c7369ac-8367-419f-b50f-82d0ba1cef96/pull/0.log" Feb 17 
18:59:51 crc kubenswrapper[4680]: I0217 18:59:51.903354 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk_9c7369ac-8367-419f-b50f-82d0ba1cef96/util/0.log" Feb 17 18:59:51 crc kubenswrapper[4680]: I0217 18:59:51.984909 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk_9c7369ac-8367-419f-b50f-82d0ba1cef96/extract/0.log" Feb 17 18:59:52 crc kubenswrapper[4680]: I0217 18:59:52.191837 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b96s8_88dae8bc-f19a-4b03-9443-19af5340b54e/extract-utilities/0.log" Feb 17 18:59:52 crc kubenswrapper[4680]: I0217 18:59:52.411974 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b96s8_88dae8bc-f19a-4b03-9443-19af5340b54e/extract-content/0.log" Feb 17 18:59:52 crc kubenswrapper[4680]: I0217 18:59:52.460497 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b96s8_88dae8bc-f19a-4b03-9443-19af5340b54e/extract-utilities/0.log" Feb 17 18:59:52 crc kubenswrapper[4680]: I0217 18:59:52.498636 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b96s8_88dae8bc-f19a-4b03-9443-19af5340b54e/extract-content/0.log" Feb 17 18:59:53 crc kubenswrapper[4680]: I0217 18:59:53.027544 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b96s8_88dae8bc-f19a-4b03-9443-19af5340b54e/extract-utilities/0.log" Feb 17 18:59:53 crc kubenswrapper[4680]: I0217 18:59:53.097505 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b96s8_88dae8bc-f19a-4b03-9443-19af5340b54e/extract-content/0.log" Feb 17 18:59:53 crc kubenswrapper[4680]: I0217 18:59:53.400487 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-px9ln_3bb4764c-4bf2-46c1-a940-0aa75a205881/extract-utilities/0.log" Feb 17 18:59:53 crc kubenswrapper[4680]: I0217 18:59:53.472450 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b96s8_88dae8bc-f19a-4b03-9443-19af5340b54e/registry-server/0.log" Feb 17 18:59:53 crc kubenswrapper[4680]: I0217 18:59:53.617426 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-px9ln_3bb4764c-4bf2-46c1-a940-0aa75a205881/extract-utilities/0.log" Feb 17 18:59:53 crc kubenswrapper[4680]: I0217 18:59:53.625459 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-px9ln_3bb4764c-4bf2-46c1-a940-0aa75a205881/extract-content/0.log" Feb 17 18:59:53 crc kubenswrapper[4680]: I0217 18:59:53.686023 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-px9ln_3bb4764c-4bf2-46c1-a940-0aa75a205881/extract-content/0.log" Feb 17 18:59:53 crc kubenswrapper[4680]: I0217 18:59:53.840261 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-px9ln_3bb4764c-4bf2-46c1-a940-0aa75a205881/extract-content/0.log" Feb 17 18:59:53 crc kubenswrapper[4680]: I0217 18:59:53.841543 4680 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-px9ln_3bb4764c-4bf2-46c1-a940-0aa75a205881/extract-utilities/0.log" Feb 17 18:59:54 crc kubenswrapper[4680]: I0217 18:59:54.209240 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j_41e38977-61de-46c0-8a84-9cf4de5d5156/util/0.log" Feb 17 18:59:54 crc kubenswrapper[4680]: I0217 18:59:54.334975 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-px9ln_3bb4764c-4bf2-46c1-a940-0aa75a205881/registry-server/0.log" Feb 17 18:59:54 crc kubenswrapper[4680]: I0217 18:59:54.713878 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j_41e38977-61de-46c0-8a84-9cf4de5d5156/pull/0.log" Feb 17 18:59:54 crc kubenswrapper[4680]: I0217 18:59:54.733429 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j_41e38977-61de-46c0-8a84-9cf4de5d5156/util/0.log" Feb 17 18:59:54 crc kubenswrapper[4680]: I0217 18:59:54.882183 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j_41e38977-61de-46c0-8a84-9cf4de5d5156/pull/0.log" Feb 17 18:59:54 crc kubenswrapper[4680]: I0217 18:59:54.985833 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j_41e38977-61de-46c0-8a84-9cf4de5d5156/extract/0.log" Feb 17 18:59:55 crc kubenswrapper[4680]: I0217 18:59:55.031050 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j_41e38977-61de-46c0-8a84-9cf4de5d5156/util/0.log" Feb 17 18:59:55 crc kubenswrapper[4680]: I0217 18:59:55.047094 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j_41e38977-61de-46c0-8a84-9cf4de5d5156/pull/0.log" Feb 17 18:59:55 crc kubenswrapper[4680]: I0217 18:59:55.205019 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-slcqr_b8d8d226-a13e-4e0c-b3b9-8fe70a25d02a/marketplace-operator/0.log" Feb 17 18:59:55 crc kubenswrapper[4680]: I0217 18:59:55.338293 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6lssp_2660862d-dbb8-465e-a47d-91dab06d81cb/extract-utilities/0.log" Feb 17 18:59:55 crc kubenswrapper[4680]: I0217 18:59:55.517295 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6lssp_2660862d-dbb8-465e-a47d-91dab06d81cb/extract-utilities/0.log" Feb 17 18:59:55 crc kubenswrapper[4680]: I0217 18:59:55.518358 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6lssp_2660862d-dbb8-465e-a47d-91dab06d81cb/extract-content/0.log" Feb 17 18:59:55 crc kubenswrapper[4680]: I0217 18:59:55.552111 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6lssp_2660862d-dbb8-465e-a47d-91dab06d81cb/extract-content/0.log" Feb 17 18:59:55 crc kubenswrapper[4680]: I0217 18:59:55.743580 4680 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-6lssp_2660862d-dbb8-465e-a47d-91dab06d81cb/extract-content/0.log" Feb 17 18:59:55 crc kubenswrapper[4680]: I0217 18:59:55.807196 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6lssp_2660862d-dbb8-465e-a47d-91dab06d81cb/extract-utilities/0.log" Feb 17 18:59:55 crc kubenswrapper[4680]: I0217 18:59:55.969752 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6lssp_2660862d-dbb8-465e-a47d-91dab06d81cb/registry-server/0.log" Feb 17 18:59:56 crc kubenswrapper[4680]: I0217 18:59:56.040812 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cc5x2_36277038-c2ae-486f-aa6d-ac179eb779a5/extract-utilities/0.log" Feb 17 18:59:56 crc kubenswrapper[4680]: I0217 18:59:56.105578 4680 scope.go:117] "RemoveContainer" containerID="5878893ca78c00fe85ae7205f751e33002b0f532b3587ba2302097298986a151" Feb 17 18:59:56 crc kubenswrapper[4680]: E0217 18:59:56.105818 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 18:59:56 crc kubenswrapper[4680]: I0217 18:59:56.171481 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cc5x2_36277038-c2ae-486f-aa6d-ac179eb779a5/extract-utilities/0.log" Feb 17 18:59:56 crc kubenswrapper[4680]: I0217 18:59:56.207809 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cc5x2_36277038-c2ae-486f-aa6d-ac179eb779a5/extract-content/0.log" Feb 17 18:59:56 crc kubenswrapper[4680]: I0217 18:59:56.225700 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cc5x2_36277038-c2ae-486f-aa6d-ac179eb779a5/extract-content/0.log" Feb 17 18:59:56 crc kubenswrapper[4680]: I0217 18:59:56.418727 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cc5x2_36277038-c2ae-486f-aa6d-ac179eb779a5/extract-utilities/0.log" Feb 17 18:59:56 crc kubenswrapper[4680]: I0217 18:59:56.437030 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cc5x2_36277038-c2ae-486f-aa6d-ac179eb779a5/extract-content/0.log" Feb 17 18:59:56 crc kubenswrapper[4680]: I0217 18:59:56.866918 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cc5x2_36277038-c2ae-486f-aa6d-ac179eb779a5/registry-server/0.log" Feb 17 19:00:00 crc kubenswrapper[4680]: I0217 19:00:00.161234 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522580-s2l28"] Feb 17 19:00:00 crc kubenswrapper[4680]: E0217 19:00:00.162223 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4344c10a-b977-4519-baa6-27a6329da4de" containerName="registry-server" Feb 17 19:00:00 crc kubenswrapper[4680]: I0217 19:00:00.162240 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="4344c10a-b977-4519-baa6-27a6329da4de" containerName="registry-server" Feb 17 19:00:00 crc kubenswrapper[4680]: E0217 19:00:00.162263 4680 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4344c10a-b977-4519-baa6-27a6329da4de" containerName="extract-content" Feb 17 19:00:00 crc kubenswrapper[4680]: I0217 19:00:00.162271 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="4344c10a-b977-4519-baa6-27a6329da4de" containerName="extract-content" Feb 17 19:00:00 crc kubenswrapper[4680]: E0217 19:00:00.162287 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71028606-dbd8-4bc3-902a-2c56f89e2341" containerName="extract-utilities" Feb 17 19:00:00 crc kubenswrapper[4680]: I0217 19:00:00.162295 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="71028606-dbd8-4bc3-902a-2c56f89e2341" containerName="extract-utilities" Feb 17 19:00:00 crc kubenswrapper[4680]: E0217 19:00:00.162309 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71028606-dbd8-4bc3-902a-2c56f89e2341" containerName="extract-content" Feb 17 19:00:00 crc kubenswrapper[4680]: I0217 19:00:00.162334 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="71028606-dbd8-4bc3-902a-2c56f89e2341" containerName="extract-content" Feb 17 19:00:00 crc kubenswrapper[4680]: E0217 19:00:00.162346 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71028606-dbd8-4bc3-902a-2c56f89e2341" containerName="registry-server" Feb 17 19:00:00 crc kubenswrapper[4680]: I0217 19:00:00.162353 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="71028606-dbd8-4bc3-902a-2c56f89e2341" containerName="registry-server" Feb 17 19:00:00 crc kubenswrapper[4680]: E0217 19:00:00.162373 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4344c10a-b977-4519-baa6-27a6329da4de" containerName="extract-utilities" Feb 17 19:00:00 crc kubenswrapper[4680]: I0217 19:00:00.162380 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="4344c10a-b977-4519-baa6-27a6329da4de" containerName="extract-utilities" Feb 17 19:00:00 crc kubenswrapper[4680]: I0217 19:00:00.162656 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="4344c10a-b977-4519-baa6-27a6329da4de" containerName="registry-server" Feb 17 19:00:00 crc kubenswrapper[4680]: I0217 19:00:00.162678 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="71028606-dbd8-4bc3-902a-2c56f89e2341" containerName="registry-server" Feb 17 19:00:00 crc kubenswrapper[4680]: I0217 19:00:00.163548 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522580-s2l28" Feb 17 19:00:00 crc kubenswrapper[4680]: I0217 19:00:00.165812 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 19:00:00 crc kubenswrapper[4680]: I0217 19:00:00.166098 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 19:00:00 crc kubenswrapper[4680]: I0217 19:00:00.177268 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522580-s2l28"] Feb 17 19:00:00 crc kubenswrapper[4680]: I0217 19:00:00.266311 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fdceacc-643b-4052-9be0-44d1375bab65-config-volume\") pod \"collect-profiles-29522580-s2l28\" (UID: \"0fdceacc-643b-4052-9be0-44d1375bab65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522580-s2l28" Feb 17 19:00:00 crc kubenswrapper[4680]: I0217 19:00:00.267081 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dn62\" (UniqueName: \"kubernetes.io/projected/0fdceacc-643b-4052-9be0-44d1375bab65-kube-api-access-5dn62\") pod \"collect-profiles-29522580-s2l28\" (UID: \"0fdceacc-643b-4052-9be0-44d1375bab65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522580-s2l28" Feb 17 19:00:00 crc kubenswrapper[4680]: I0217 19:00:00.267164 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fdceacc-643b-4052-9be0-44d1375bab65-secret-volume\") pod \"collect-profiles-29522580-s2l28\" (UID: \"0fdceacc-643b-4052-9be0-44d1375bab65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522580-s2l28" Feb 17 19:00:00 crc kubenswrapper[4680]: I0217 19:00:00.368742 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dn62\" (UniqueName: \"kubernetes.io/projected/0fdceacc-643b-4052-9be0-44d1375bab65-kube-api-access-5dn62\") pod \"collect-profiles-29522580-s2l28\" (UID: \"0fdceacc-643b-4052-9be0-44d1375bab65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522580-s2l28" Feb 17 19:00:00 crc kubenswrapper[4680]: I0217 19:00:00.368829 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fdceacc-643b-4052-9be0-44d1375bab65-secret-volume\") pod \"collect-profiles-29522580-s2l28\" (UID: \"0fdceacc-643b-4052-9be0-44d1375bab65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522580-s2l28" Feb 17 19:00:00 crc kubenswrapper[4680]: I0217 19:00:00.368882 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fdceacc-643b-4052-9be0-44d1375bab65-config-volume\") pod \"collect-profiles-29522580-s2l28\" (UID: \"0fdceacc-643b-4052-9be0-44d1375bab65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522580-s2l28" Feb 17 19:00:00 crc kubenswrapper[4680]: I0217 19:00:00.370169 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fdceacc-643b-4052-9be0-44d1375bab65-config-volume\") pod 
\"collect-profiles-29522580-s2l28\" (UID: \"0fdceacc-643b-4052-9be0-44d1375bab65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522580-s2l28" Feb 17 19:00:00 crc kubenswrapper[4680]: I0217 19:00:00.516216 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fdceacc-643b-4052-9be0-44d1375bab65-secret-volume\") pod \"collect-profiles-29522580-s2l28\" (UID: \"0fdceacc-643b-4052-9be0-44d1375bab65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522580-s2l28" Feb 17 19:00:00 crc kubenswrapper[4680]: I0217 19:00:00.516446 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dn62\" (UniqueName: \"kubernetes.io/projected/0fdceacc-643b-4052-9be0-44d1375bab65-kube-api-access-5dn62\") pod \"collect-profiles-29522580-s2l28\" (UID: \"0fdceacc-643b-4052-9be0-44d1375bab65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522580-s2l28" Feb 17 19:00:00 crc kubenswrapper[4680]: I0217 19:00:00.789263 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522580-s2l28" Feb 17 19:00:01 crc kubenswrapper[4680]: I0217 19:00:01.245193 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522580-s2l28"] Feb 17 19:00:01 crc kubenswrapper[4680]: I0217 19:00:01.351027 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522580-s2l28" event={"ID":"0fdceacc-643b-4052-9be0-44d1375bab65","Type":"ContainerStarted","Data":"9f017613bca9255b67281ad490297f3c4271af509ef33a12b042d557712f58bc"} Feb 17 19:00:02 crc kubenswrapper[4680]: I0217 19:00:02.360763 4680 generic.go:334] "Generic (PLEG): container finished" podID="0fdceacc-643b-4052-9be0-44d1375bab65" containerID="b1774dd116f1a179f1d434ffbdc51e61e3a16baea6801d1fa7cd9dcf1ce8d00d" exitCode=0 Feb 17 19:00:02 crc kubenswrapper[4680]: I0217 19:00:02.361002 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522580-s2l28" event={"ID":"0fdceacc-643b-4052-9be0-44d1375bab65","Type":"ContainerDied","Data":"b1774dd116f1a179f1d434ffbdc51e61e3a16baea6801d1fa7cd9dcf1ce8d00d"} Feb 17 19:00:03 crc kubenswrapper[4680]: I0217 19:00:03.735946 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522580-s2l28" Feb 17 19:00:03 crc kubenswrapper[4680]: I0217 19:00:03.829568 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fdceacc-643b-4052-9be0-44d1375bab65-secret-volume\") pod \"0fdceacc-643b-4052-9be0-44d1375bab65\" (UID: \"0fdceacc-643b-4052-9be0-44d1375bab65\") " Feb 17 19:00:03 crc kubenswrapper[4680]: I0217 19:00:03.829672 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dn62\" (UniqueName: \"kubernetes.io/projected/0fdceacc-643b-4052-9be0-44d1375bab65-kube-api-access-5dn62\") pod \"0fdceacc-643b-4052-9be0-44d1375bab65\" (UID: \"0fdceacc-643b-4052-9be0-44d1375bab65\") " Feb 17 19:00:03 crc kubenswrapper[4680]: I0217 19:00:03.829963 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fdceacc-643b-4052-9be0-44d1375bab65-config-volume\") pod \"0fdceacc-643b-4052-9be0-44d1375bab65\" (UID: \"0fdceacc-643b-4052-9be0-44d1375bab65\") " Feb 17 19:00:03 crc kubenswrapper[4680]: I0217 19:00:03.830484 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fdceacc-643b-4052-9be0-44d1375bab65-config-volume" (OuterVolumeSpecName: "config-volume") pod "0fdceacc-643b-4052-9be0-44d1375bab65" (UID: "0fdceacc-643b-4052-9be0-44d1375bab65"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 19:00:03 crc kubenswrapper[4680]: I0217 19:00:03.846549 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fdceacc-643b-4052-9be0-44d1375bab65-kube-api-access-5dn62" (OuterVolumeSpecName: "kube-api-access-5dn62") pod "0fdceacc-643b-4052-9be0-44d1375bab65" (UID: "0fdceacc-643b-4052-9be0-44d1375bab65"). InnerVolumeSpecName "kube-api-access-5dn62". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 19:00:03 crc kubenswrapper[4680]: I0217 19:00:03.856133 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fdceacc-643b-4052-9be0-44d1375bab65-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0fdceacc-643b-4052-9be0-44d1375bab65" (UID: "0fdceacc-643b-4052-9be0-44d1375bab65"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 19:00:03 crc kubenswrapper[4680]: I0217 19:00:03.932139 4680 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fdceacc-643b-4052-9be0-44d1375bab65-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 19:00:03 crc kubenswrapper[4680]: I0217 19:00:03.932170 4680 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fdceacc-643b-4052-9be0-44d1375bab65-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 19:00:03 crc kubenswrapper[4680]: I0217 19:00:03.932180 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dn62\" (UniqueName: \"kubernetes.io/projected/0fdceacc-643b-4052-9be0-44d1375bab65-kube-api-access-5dn62\") on node \"crc\" DevicePath \"\"" Feb 17 19:00:04 crc kubenswrapper[4680]: I0217 19:00:04.376422 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522580-s2l28" event={"ID":"0fdceacc-643b-4052-9be0-44d1375bab65","Type":"ContainerDied","Data":"9f017613bca9255b67281ad490297f3c4271af509ef33a12b042d557712f58bc"} Feb 17 19:00:04 crc kubenswrapper[4680]: I0217 19:00:04.376464 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f017613bca9255b67281ad490297f3c4271af509ef33a12b042d557712f58bc" Feb 17 19:00:04 crc kubenswrapper[4680]: I0217 19:00:04.376506 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522580-s2l28" Feb 17 19:00:04 crc kubenswrapper[4680]: I0217 19:00:04.822446 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522535-s5qwc"] Feb 17 19:00:04 crc kubenswrapper[4680]: I0217 19:00:04.829880 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522535-s5qwc"] Feb 17 19:00:05 crc kubenswrapper[4680]: I0217 19:00:05.121245 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd5f3080-2ae7-4012-8230-fc007a1a1b8f" path="/var/lib/kubelet/pods/dd5f3080-2ae7-4012-8230-fc007a1a1b8f/volumes" Feb 17 19:00:09 crc kubenswrapper[4680]: I0217 19:00:09.105131 4680 scope.go:117] "RemoveContainer" containerID="5878893ca78c00fe85ae7205f751e33002b0f532b3587ba2302097298986a151" Feb 17 19:00:09 crc kubenswrapper[4680]: E0217 19:00:09.105800 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 19:00:23 crc kubenswrapper[4680]: I0217 19:00:23.105161 4680 scope.go:117] "RemoveContainer" containerID="5878893ca78c00fe85ae7205f751e33002b0f532b3587ba2302097298986a151" Feb 17 19:00:23 crc kubenswrapper[4680]: E0217 19:00:23.106904 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 19:00:38 crc kubenswrapper[4680]: I0217 19:00:38.105864 4680 scope.go:117] "RemoveContainer" containerID="5878893ca78c00fe85ae7205f751e33002b0f532b3587ba2302097298986a151" Feb 17 19:00:38 crc kubenswrapper[4680]: E0217 19:00:38.106603 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 19:00:49 crc kubenswrapper[4680]: I0217 19:00:49.105498 4680 scope.go:117] "RemoveContainer" containerID="5878893ca78c00fe85ae7205f751e33002b0f532b3587ba2302097298986a151" Feb 17 19:00:49 crc kubenswrapper[4680]: E0217 19:00:49.106245 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 19:00:56 crc kubenswrapper[4680]: I0217 19:00:56.543495 4680 scope.go:117] "RemoveContainer" containerID="c47a0d8b2b835691675b0d91faf02cafbd3af9433ad65964ec55f21bdbad9997" Feb 17 19:01:00 crc kubenswrapper[4680]: I0217 19:01:00.181404 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29522581-mtv5h"] Feb 17 19:01:00 crc kubenswrapper[4680]: E0217 19:01:00.183146 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fdceacc-643b-4052-9be0-44d1375bab65" containerName="collect-profiles" Feb 17 19:01:00 crc kubenswrapper[4680]: I0217 19:01:00.183163 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fdceacc-643b-4052-9be0-44d1375bab65" containerName="collect-profiles" Feb 17 19:01:00 crc kubenswrapper[4680]: I0217 19:01:00.183383 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fdceacc-643b-4052-9be0-44d1375bab65" containerName="collect-profiles" Feb 17 19:01:00 crc kubenswrapper[4680]: I0217 19:01:00.184004 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29522581-mtv5h" Feb 17 19:01:00 crc kubenswrapper[4680]: I0217 19:01:00.217050 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29522581-mtv5h"] Feb 17 19:01:00 crc kubenswrapper[4680]: I0217 19:01:00.264093 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3138dae-021c-4ebf-8ebd-81aed0fc0499-combined-ca-bundle\") pod \"keystone-cron-29522581-mtv5h\" (UID: \"e3138dae-021c-4ebf-8ebd-81aed0fc0499\") " pod="openstack/keystone-cron-29522581-mtv5h" Feb 17 19:01:00 crc kubenswrapper[4680]: I0217 19:01:00.264204 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpwrg\" (UniqueName: \"kubernetes.io/projected/e3138dae-021c-4ebf-8ebd-81aed0fc0499-kube-api-access-dpwrg\") pod \"keystone-cron-29522581-mtv5h\" (UID: \"e3138dae-021c-4ebf-8ebd-81aed0fc0499\") " pod="openstack/keystone-cron-29522581-mtv5h" Feb 17 19:01:00 crc kubenswrapper[4680]: I0217 19:01:00.264254 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3138dae-021c-4ebf-8ebd-81aed0fc0499-config-data\") pod \"keystone-cron-29522581-mtv5h\" (UID: \"e3138dae-021c-4ebf-8ebd-81aed0fc0499\") " pod="openstack/keystone-cron-29522581-mtv5h" Feb 17 19:01:00 crc kubenswrapper[4680]: I0217 19:01:00.264428 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3138dae-021c-4ebf-8ebd-81aed0fc0499-fernet-keys\") pod \"keystone-cron-29522581-mtv5h\" (UID: \"e3138dae-021c-4ebf-8ebd-81aed0fc0499\") " pod="openstack/keystone-cron-29522581-mtv5h" Feb 17 19:01:00 crc kubenswrapper[4680]: I0217 19:01:00.369328 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3138dae-021c-4ebf-8ebd-81aed0fc0499-combined-ca-bundle\") pod \"keystone-cron-29522581-mtv5h\" (UID: \"e3138dae-021c-4ebf-8ebd-81aed0fc0499\") " pod="openstack/keystone-cron-29522581-mtv5h" Feb 17 19:01:00 crc kubenswrapper[4680]: I0217 19:01:00.369432 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpwrg\" (UniqueName: \"kubernetes.io/projected/e3138dae-021c-4ebf-8ebd-81aed0fc0499-kube-api-access-dpwrg\") pod \"keystone-cron-29522581-mtv5h\" (UID: \"e3138dae-021c-4ebf-8ebd-81aed0fc0499\") " pod="openstack/keystone-cron-29522581-mtv5h" Feb 17 19:01:00 crc kubenswrapper[4680]: I0217 19:01:00.369477 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3138dae-021c-4ebf-8ebd-81aed0fc0499-config-data\") pod \"keystone-cron-29522581-mtv5h\" (UID: \"e3138dae-021c-4ebf-8ebd-81aed0fc0499\") " pod="openstack/keystone-cron-29522581-mtv5h" Feb 17 19:01:00 crc kubenswrapper[4680]: I0217 19:01:00.369535 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3138dae-021c-4ebf-8ebd-81aed0fc0499-fernet-keys\") pod \"keystone-cron-29522581-mtv5h\" (UID: \"e3138dae-021c-4ebf-8ebd-81aed0fc0499\") " pod="openstack/keystone-cron-29522581-mtv5h" Feb 17 19:01:00 crc kubenswrapper[4680]: I0217 19:01:00.382119 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3138dae-021c-4ebf-8ebd-81aed0fc0499-combined-ca-bundle\") pod \"keystone-cron-29522581-mtv5h\" (UID: \"e3138dae-021c-4ebf-8ebd-81aed0fc0499\") " pod="openstack/keystone-cron-29522581-mtv5h" Feb 17 19:01:00 crc kubenswrapper[4680]: I0217 19:01:00.387096 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3138dae-021c-4ebf-8ebd-81aed0fc0499-fernet-keys\") pod \"keystone-cron-29522581-mtv5h\" (UID: \"e3138dae-021c-4ebf-8ebd-81aed0fc0499\") " pod="openstack/keystone-cron-29522581-mtv5h" Feb 17 19:01:00 crc kubenswrapper[4680]: I0217 19:01:00.389176 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3138dae-021c-4ebf-8ebd-81aed0fc0499-config-data\") pod \"keystone-cron-29522581-mtv5h\" (UID: \"e3138dae-021c-4ebf-8ebd-81aed0fc0499\") " pod="openstack/keystone-cron-29522581-mtv5h" Feb 17 19:01:00 crc kubenswrapper[4680]: I0217 19:01:00.431201 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpwrg\" (UniqueName: \"kubernetes.io/projected/e3138dae-021c-4ebf-8ebd-81aed0fc0499-kube-api-access-dpwrg\") pod \"keystone-cron-29522581-mtv5h\" (UID: \"e3138dae-021c-4ebf-8ebd-81aed0fc0499\") " pod="openstack/keystone-cron-29522581-mtv5h" Feb 17 19:01:00 crc kubenswrapper[4680]: I0217 19:01:00.557150 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29522581-mtv5h" Feb 17 19:01:01 crc kubenswrapper[4680]: I0217 19:01:01.073662 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29522581-mtv5h"] Feb 17 19:01:01 crc kubenswrapper[4680]: I0217 19:01:01.917417 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522581-mtv5h" event={"ID":"e3138dae-021c-4ebf-8ebd-81aed0fc0499","Type":"ContainerStarted","Data":"84189da33d8771fa8520f29540a76b172d7a95d6a942e1b2d97fe94ebeee84c1"} Feb 17 19:01:01 crc kubenswrapper[4680]: I0217 19:01:01.917689 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522581-mtv5h" event={"ID":"e3138dae-021c-4ebf-8ebd-81aed0fc0499","Type":"ContainerStarted","Data":"7e0fe2668a094d03cff402e8d4fface1dd55e5d5ecf6c53a8dce23008cc0023b"} Feb 17 19:01:01 crc kubenswrapper[4680]: I0217 19:01:01.942056 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29522581-mtv5h" podStartSLOduration=1.942036241 podStartE2EDuration="1.942036241s" podCreationTimestamp="2026-02-17 19:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 19:01:01.930739883 +0000 UTC m=+4591.481638572" watchObservedRunningTime="2026-02-17 19:01:01.942036241 +0000 UTC m=+4591.492934910" Feb 17 19:01:02 crc kubenswrapper[4680]: I0217 19:01:02.105466 4680 scope.go:117] "RemoveContainer" containerID="5878893ca78c00fe85ae7205f751e33002b0f532b3587ba2302097298986a151" Feb 17 19:01:02 crc kubenswrapper[4680]: E0217 19:01:02.105694 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 19:01:03 crc kubenswrapper[4680]: I0217 19:01:03.935163 4680 generic.go:334] "Generic (PLEG): container finished" podID="e3138dae-021c-4ebf-8ebd-81aed0fc0499" containerID="84189da33d8771fa8520f29540a76b172d7a95d6a942e1b2d97fe94ebeee84c1" exitCode=0 Feb 17 19:01:03 crc kubenswrapper[4680]: I0217 19:01:03.935263 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522581-mtv5h" event={"ID":"e3138dae-021c-4ebf-8ebd-81aed0fc0499","Type":"ContainerDied","Data":"84189da33d8771fa8520f29540a76b172d7a95d6a942e1b2d97fe94ebeee84c1"} Feb 17 19:01:05 crc kubenswrapper[4680]: I0217 19:01:05.314303 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29522581-mtv5h" Feb 17 19:01:05 crc kubenswrapper[4680]: I0217 19:01:05.385019 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpwrg\" (UniqueName: \"kubernetes.io/projected/e3138dae-021c-4ebf-8ebd-81aed0fc0499-kube-api-access-dpwrg\") pod \"e3138dae-021c-4ebf-8ebd-81aed0fc0499\" (UID: \"e3138dae-021c-4ebf-8ebd-81aed0fc0499\") " Feb 17 19:01:05 crc kubenswrapper[4680]: I0217 19:01:05.385333 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3138dae-021c-4ebf-8ebd-81aed0fc0499-combined-ca-bundle\") pod \"e3138dae-021c-4ebf-8ebd-81aed0fc0499\" (UID: \"e3138dae-021c-4ebf-8ebd-81aed0fc0499\") " Feb 17 19:01:05 crc kubenswrapper[4680]: I0217 19:01:05.385416 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3138dae-021c-4ebf-8ebd-81aed0fc0499-config-data\") pod \"e3138dae-021c-4ebf-8ebd-81aed0fc0499\" (UID: \"e3138dae-021c-4ebf-8ebd-81aed0fc0499\") " Feb 17 19:01:05 crc kubenswrapper[4680]: I0217 19:01:05.385467 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3138dae-021c-4ebf-8ebd-81aed0fc0499-fernet-keys\") pod \"e3138dae-021c-4ebf-8ebd-81aed0fc0499\" (UID: \"e3138dae-021c-4ebf-8ebd-81aed0fc0499\") " Feb 17 19:01:05 crc kubenswrapper[4680]: I0217 19:01:05.394607 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3138dae-021c-4ebf-8ebd-81aed0fc0499-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e3138dae-021c-4ebf-8ebd-81aed0fc0499" (UID: "e3138dae-021c-4ebf-8ebd-81aed0fc0499"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 19:01:05 crc kubenswrapper[4680]: I0217 19:01:05.399031 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3138dae-021c-4ebf-8ebd-81aed0fc0499-kube-api-access-dpwrg" (OuterVolumeSpecName: "kube-api-access-dpwrg") pod "e3138dae-021c-4ebf-8ebd-81aed0fc0499" (UID: "e3138dae-021c-4ebf-8ebd-81aed0fc0499"). InnerVolumeSpecName "kube-api-access-dpwrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 19:01:05 crc kubenswrapper[4680]: I0217 19:01:05.439626 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3138dae-021c-4ebf-8ebd-81aed0fc0499-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3138dae-021c-4ebf-8ebd-81aed0fc0499" (UID: "e3138dae-021c-4ebf-8ebd-81aed0fc0499"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 19:01:05 crc kubenswrapper[4680]: I0217 19:01:05.460504 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3138dae-021c-4ebf-8ebd-81aed0fc0499-config-data" (OuterVolumeSpecName: "config-data") pod "e3138dae-021c-4ebf-8ebd-81aed0fc0499" (UID: "e3138dae-021c-4ebf-8ebd-81aed0fc0499"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 19:01:05 crc kubenswrapper[4680]: I0217 19:01:05.488108 4680 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3138dae-021c-4ebf-8ebd-81aed0fc0499-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 19:01:05 crc kubenswrapper[4680]: I0217 19:01:05.488538 4680 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3138dae-021c-4ebf-8ebd-81aed0fc0499-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 19:01:05 crc kubenswrapper[4680]: I0217 19:01:05.488550 4680 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3138dae-021c-4ebf-8ebd-81aed0fc0499-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 17 19:01:05 crc kubenswrapper[4680]: I0217 19:01:05.488559 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpwrg\" (UniqueName: \"kubernetes.io/projected/e3138dae-021c-4ebf-8ebd-81aed0fc0499-kube-api-access-dpwrg\") on node \"crc\" DevicePath \"\"" Feb 17 19:01:05 crc kubenswrapper[4680]: I0217 19:01:05.954250 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522581-mtv5h" event={"ID":"e3138dae-021c-4ebf-8ebd-81aed0fc0499","Type":"ContainerDied","Data":"7e0fe2668a094d03cff402e8d4fface1dd55e5d5ecf6c53a8dce23008cc0023b"} Feb 17 19:01:05 crc kubenswrapper[4680]: I0217 19:01:05.954593 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e0fe2668a094d03cff402e8d4fface1dd55e5d5ecf6c53a8dce23008cc0023b" Feb 17 19:01:05 crc kubenswrapper[4680]: I0217 19:01:05.954647 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29522581-mtv5h" Feb 17 19:01:14 crc kubenswrapper[4680]: I0217 19:01:14.111673 4680 scope.go:117] "RemoveContainer" containerID="5878893ca78c00fe85ae7205f751e33002b0f532b3587ba2302097298986a151" Feb 17 19:01:15 crc kubenswrapper[4680]: I0217 19:01:15.039259 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerStarted","Data":"beb82b8d29ac7889c1ee8f92291d8e0e0a4700c02477e2b7569bdbf8e3b1e375"} Feb 17 19:01:56 crc kubenswrapper[4680]: I0217 19:01:56.619150 4680 scope.go:117] "RemoveContainer" containerID="06eaa0ccee906935d3448120dc34c47b8163ed502cfe76cc10927f5972fe8e79" Feb 17 19:02:21 crc kubenswrapper[4680]: I0217 19:02:21.792057 4680 generic.go:334] "Generic (PLEG): container finished" podID="c4ba2d8b-b186-4d49-adb1-4765e193e2b9" containerID="b0447f1a5739910acb2b5292456c1bc2cf47998d7e46fb592e22abe4b2455d31" exitCode=0 Feb 17 19:02:21 crc kubenswrapper[4680]: I0217 19:02:21.792160 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jgmsc/must-gather-ktfst" event={"ID":"c4ba2d8b-b186-4d49-adb1-4765e193e2b9","Type":"ContainerDied","Data":"b0447f1a5739910acb2b5292456c1bc2cf47998d7e46fb592e22abe4b2455d31"} Feb 17 19:02:21 crc kubenswrapper[4680]: I0217 19:02:21.793475 4680 scope.go:117] "RemoveContainer" containerID="b0447f1a5739910acb2b5292456c1bc2cf47998d7e46fb592e22abe4b2455d31" Feb 17 19:02:22 crc kubenswrapper[4680]: I0217 19:02:22.252786 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jgmsc_must-gather-ktfst_c4ba2d8b-b186-4d49-adb1-4765e193e2b9/gather/0.log" Feb 17 19:02:30 crc kubenswrapper[4680]: I0217 19:02:30.145105 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jgmsc/must-gather-ktfst"] Feb 17 19:02:30 crc kubenswrapper[4680]: I0217 19:02:30.145886 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-jgmsc/must-gather-ktfst" podUID="c4ba2d8b-b186-4d49-adb1-4765e193e2b9" containerName="copy" containerID="cri-o://7caccfabb95521042eb8da64057fb2f2377ee36c0d2f7ec34e2bc3a2285c6b70" gracePeriod=2 Feb 17 19:02:30 crc kubenswrapper[4680]: I0217 19:02:30.153470 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jgmsc/must-gather-ktfst"] Feb 17 19:02:30 crc kubenswrapper[4680]: I0217 19:02:30.666346 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jgmsc_must-gather-ktfst_c4ba2d8b-b186-4d49-adb1-4765e193e2b9/copy/0.log" Feb 17 19:02:30 crc kubenswrapper[4680]: I0217 19:02:30.667253 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jgmsc/must-gather-ktfst" Feb 17 19:02:30 crc kubenswrapper[4680]: I0217 19:02:30.759540 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prs2f\" (UniqueName: \"kubernetes.io/projected/c4ba2d8b-b186-4d49-adb1-4765e193e2b9-kube-api-access-prs2f\") pod \"c4ba2d8b-b186-4d49-adb1-4765e193e2b9\" (UID: \"c4ba2d8b-b186-4d49-adb1-4765e193e2b9\") " Feb 17 19:02:30 crc kubenswrapper[4680]: I0217 19:02:30.759757 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c4ba2d8b-b186-4d49-adb1-4765e193e2b9-must-gather-output\") pod \"c4ba2d8b-b186-4d49-adb1-4765e193e2b9\" (UID: \"c4ba2d8b-b186-4d49-adb1-4765e193e2b9\") " Feb 17 19:02:30 crc kubenswrapper[4680]: I0217 19:02:30.765343 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4ba2d8b-b186-4d49-adb1-4765e193e2b9-kube-api-access-prs2f" (OuterVolumeSpecName: "kube-api-access-prs2f") pod "c4ba2d8b-b186-4d49-adb1-4765e193e2b9" (UID: "c4ba2d8b-b186-4d49-adb1-4765e193e2b9"). InnerVolumeSpecName "kube-api-access-prs2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 19:02:30 crc kubenswrapper[4680]: I0217 19:02:30.862713 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prs2f\" (UniqueName: \"kubernetes.io/projected/c4ba2d8b-b186-4d49-adb1-4765e193e2b9-kube-api-access-prs2f\") on node \"crc\" DevicePath \"\"" Feb 17 19:02:30 crc kubenswrapper[4680]: I0217 19:02:30.873635 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jgmsc_must-gather-ktfst_c4ba2d8b-b186-4d49-adb1-4765e193e2b9/copy/0.log" Feb 17 19:02:30 crc kubenswrapper[4680]: I0217 19:02:30.874456 4680 generic.go:334] "Generic (PLEG): container finished" podID="c4ba2d8b-b186-4d49-adb1-4765e193e2b9" containerID="7caccfabb95521042eb8da64057fb2f2377ee36c0d2f7ec34e2bc3a2285c6b70" exitCode=143 Feb 17 19:02:30 crc kubenswrapper[4680]: I0217 19:02:30.874526 4680 scope.go:117] "RemoveContainer" containerID="7caccfabb95521042eb8da64057fb2f2377ee36c0d2f7ec34e2bc3a2285c6b70" Feb 17 19:02:30 crc kubenswrapper[4680]: I0217 19:02:30.874553 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jgmsc/must-gather-ktfst" Feb 17 19:02:30 crc kubenswrapper[4680]: I0217 19:02:30.893128 4680 scope.go:117] "RemoveContainer" containerID="b0447f1a5739910acb2b5292456c1bc2cf47998d7e46fb592e22abe4b2455d31" Feb 17 19:02:30 crc kubenswrapper[4680]: I0217 19:02:30.923879 4680 scope.go:117] "RemoveContainer" containerID="7caccfabb95521042eb8da64057fb2f2377ee36c0d2f7ec34e2bc3a2285c6b70" Feb 17 19:02:30 crc kubenswrapper[4680]: E0217 19:02:30.924289 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7caccfabb95521042eb8da64057fb2f2377ee36c0d2f7ec34e2bc3a2285c6b70\": container with ID starting with 7caccfabb95521042eb8da64057fb2f2377ee36c0d2f7ec34e2bc3a2285c6b70 not found: ID does not exist" containerID="7caccfabb95521042eb8da64057fb2f2377ee36c0d2f7ec34e2bc3a2285c6b70" Feb 17 19:02:30 crc kubenswrapper[4680]: I0217 19:02:30.924440 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7caccfabb95521042eb8da64057fb2f2377ee36c0d2f7ec34e2bc3a2285c6b70"} err="failed to get container status \"7caccfabb95521042eb8da64057fb2f2377ee36c0d2f7ec34e2bc3a2285c6b70\": rpc error: code = NotFound desc = could not find container \"7caccfabb95521042eb8da64057fb2f2377ee36c0d2f7ec34e2bc3a2285c6b70\": container with ID starting with 7caccfabb95521042eb8da64057fb2f2377ee36c0d2f7ec34e2bc3a2285c6b70 not found: ID does not exist" Feb 17 19:02:30 crc kubenswrapper[4680]: I0217 19:02:30.924531 4680 scope.go:117] "RemoveContainer" containerID="b0447f1a5739910acb2b5292456c1bc2cf47998d7e46fb592e22abe4b2455d31" Feb 17 19:02:30 crc kubenswrapper[4680]: E0217 19:02:30.924772 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0447f1a5739910acb2b5292456c1bc2cf47998d7e46fb592e22abe4b2455d31\": container with ID starting with b0447f1a5739910acb2b5292456c1bc2cf47998d7e46fb592e22abe4b2455d31 not found: ID does not exist" containerID="b0447f1a5739910acb2b5292456c1bc2cf47998d7e46fb592e22abe4b2455d31" Feb 17 19:02:30 crc kubenswrapper[4680]: I0217 19:02:30.924867 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0447f1a5739910acb2b5292456c1bc2cf47998d7e46fb592e22abe4b2455d31"} err="failed to get container status \"b0447f1a5739910acb2b5292456c1bc2cf47998d7e46fb592e22abe4b2455d31\": rpc error: code = NotFound desc = could not find container \"b0447f1a5739910acb2b5292456c1bc2cf47998d7e46fb592e22abe4b2455d31\": container with ID starting with b0447f1a5739910acb2b5292456c1bc2cf47998d7e46fb592e22abe4b2455d31 not found: ID does not exist" Feb 17 19:02:30 crc kubenswrapper[4680]: I0217 19:02:30.956073 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4ba2d8b-b186-4d49-adb1-4765e193e2b9-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "c4ba2d8b-b186-4d49-adb1-4765e193e2b9" (UID: "c4ba2d8b-b186-4d49-adb1-4765e193e2b9"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 19:02:30 crc kubenswrapper[4680]: I0217 19:02:30.963809 4680 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c4ba2d8b-b186-4d49-adb1-4765e193e2b9-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 17 19:02:31 crc kubenswrapper[4680]: I0217 19:02:31.118447 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4ba2d8b-b186-4d49-adb1-4765e193e2b9" path="/var/lib/kubelet/pods/c4ba2d8b-b186-4d49-adb1-4765e193e2b9/volumes" Feb 17 19:02:57 crc kubenswrapper[4680]: I0217 19:02:57.108332 4680 scope.go:117] "RemoveContainer" containerID="1e228a3b19ce194798c8b0d28127575ecdf1f6222816ffec1245b9de5e5f1433" Feb 17 19:03:35 crc kubenswrapper[4680]: I0217 19:03:35.588734 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 19:03:35 crc kubenswrapper[4680]: I0217 19:03:35.589267 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 19:04:05 crc kubenswrapper[4680]: I0217 19:04:05.588869 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 19:04:05 crc kubenswrapper[4680]: I0217 19:04:05.589307 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 19:04:35 crc kubenswrapper[4680]: I0217 19:04:35.589535 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 19:04:35 crc kubenswrapper[4680]: I0217 19:04:35.590499 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 19:04:35 crc kubenswrapper[4680]: I0217 19:04:35.590587 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" Feb 17 19:04:35 crc kubenswrapper[4680]: I0217 19:04:35.591796 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"beb82b8d29ac7889c1ee8f92291d8e0e0a4700c02477e2b7569bdbf8e3b1e375"} pod="openshift-machine-config-operator/machine-config-daemon-rfw56" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 19:04:35 crc kubenswrapper[4680]: I0217 19:04:35.591917 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" containerID="cri-o://beb82b8d29ac7889c1ee8f92291d8e0e0a4700c02477e2b7569bdbf8e3b1e375" gracePeriod=600 Feb 17 19:04:35 crc kubenswrapper[4680]: I0217 19:04:35.946980 4680 generic.go:334] "Generic (PLEG): container finished" podID="8dd08058-016e-4607-8be6-40463674c2fe" containerID="beb82b8d29ac7889c1ee8f92291d8e0e0a4700c02477e2b7569bdbf8e3b1e375" exitCode=0 Feb 17 19:04:35 crc kubenswrapper[4680]: I0217 19:04:35.947253 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerDied","Data":"beb82b8d29ac7889c1ee8f92291d8e0e0a4700c02477e2b7569bdbf8e3b1e375"} Feb 17 19:04:35 crc kubenswrapper[4680]: I0217 19:04:35.947516 4680 scope.go:117] "RemoveContainer" containerID="5878893ca78c00fe85ae7205f751e33002b0f532b3587ba2302097298986a151" Feb 17 19:04:36 crc kubenswrapper[4680]: I0217 19:04:36.959174 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerStarted","Data":"510f9ea3cf36b25b4f055d58767be69f3b9b74d127a53a6bc4be5ef815799d22"} Feb 17 19:04:43 crc kubenswrapper[4680]: I0217 19:04:43.589892 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jjcrx"] Feb 17 19:04:43 crc kubenswrapper[4680]: E0217 19:04:43.593376 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4ba2d8b-b186-4d49-adb1-4765e193e2b9" containerName="gather" Feb 17 19:04:43 crc kubenswrapper[4680]: I0217 19:04:43.593412 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4ba2d8b-b186-4d49-adb1-4765e193e2b9" containerName="gather" Feb 17 19:04:43 crc kubenswrapper[4680]: E0217 19:04:43.593491 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4ba2d8b-b186-4d49-adb1-4765e193e2b9" containerName="copy" Feb 17 19:04:43 crc kubenswrapper[4680]: I0217 19:04:43.593505 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4ba2d8b-b186-4d49-adb1-4765e193e2b9" containerName="copy" Feb 17 19:04:43 crc kubenswrapper[4680]: E0217 19:04:43.593528 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3138dae-021c-4ebf-8ebd-81aed0fc0499" containerName="keystone-cron" Feb 17 19:04:43 crc kubenswrapper[4680]: I0217 19:04:43.593541 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3138dae-021c-4ebf-8ebd-81aed0fc0499" containerName="keystone-cron" Feb 17 19:04:43 crc kubenswrapper[4680]: I0217 19:04:43.593915 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4ba2d8b-b186-4d49-adb1-4765e193e2b9" containerName="copy" Feb 17 19:04:43 crc kubenswrapper[4680]: I0217 19:04:43.593949 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4ba2d8b-b186-4d49-adb1-4765e193e2b9" containerName="gather" Feb 17 19:04:43 crc kubenswrapper[4680]: I0217 19:04:43.593988 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3138dae-021c-4ebf-8ebd-81aed0fc0499" containerName="keystone-cron" Feb 17 19:04:43 crc kubenswrapper[4680]: I0217 
19:04:43.596288 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jjcrx" Feb 17 19:04:43 crc kubenswrapper[4680]: I0217 19:04:43.605996 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jjcrx"] Feb 17 19:04:43 crc kubenswrapper[4680]: I0217 19:04:43.670493 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1d745f2-3e63-47ef-b190-c837c9277122-utilities\") pod \"community-operators-jjcrx\" (UID: \"b1d745f2-3e63-47ef-b190-c837c9277122\") " pod="openshift-marketplace/community-operators-jjcrx" Feb 17 19:04:43 crc kubenswrapper[4680]: I0217 19:04:43.670949 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ss4q\" (UniqueName: \"kubernetes.io/projected/b1d745f2-3e63-47ef-b190-c837c9277122-kube-api-access-2ss4q\") pod \"community-operators-jjcrx\" (UID: \"b1d745f2-3e63-47ef-b190-c837c9277122\") " pod="openshift-marketplace/community-operators-jjcrx" Feb 17 19:04:43 crc kubenswrapper[4680]: I0217 19:04:43.670976 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1d745f2-3e63-47ef-b190-c837c9277122-catalog-content\") pod \"community-operators-jjcrx\" (UID: \"b1d745f2-3e63-47ef-b190-c837c9277122\") " pod="openshift-marketplace/community-operators-jjcrx" Feb 17 19:04:43 crc kubenswrapper[4680]: I0217 19:04:43.771863 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1d745f2-3e63-47ef-b190-c837c9277122-utilities\") pod \"community-operators-jjcrx\" (UID: \"b1d745f2-3e63-47ef-b190-c837c9277122\") " pod="openshift-marketplace/community-operators-jjcrx" Feb 17 19:04:43 crc kubenswrapper[4680]: I0217 19:04:43.772193 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ss4q\" (UniqueName: \"kubernetes.io/projected/b1d745f2-3e63-47ef-b190-c837c9277122-kube-api-access-2ss4q\") pod \"community-operators-jjcrx\" (UID: \"b1d745f2-3e63-47ef-b190-c837c9277122\") " pod="openshift-marketplace/community-operators-jjcrx" Feb 17 19:04:43 crc kubenswrapper[4680]: I0217 19:04:43.772298 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1d745f2-3e63-47ef-b190-c837c9277122-catalog-content\") pod \"community-operators-jjcrx\" (UID: \"b1d745f2-3e63-47ef-b190-c837c9277122\") " pod="openshift-marketplace/community-operators-jjcrx" Feb 17 19:04:43 crc kubenswrapper[4680]: I0217 19:04:43.772397 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1d745f2-3e63-47ef-b190-c837c9277122-utilities\") pod \"community-operators-jjcrx\" (UID: \"b1d745f2-3e63-47ef-b190-c837c9277122\") " pod="openshift-marketplace/community-operators-jjcrx" Feb 17 19:04:43 crc kubenswrapper[4680]: I0217 19:04:43.772847 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1d745f2-3e63-47ef-b190-c837c9277122-catalog-content\") pod \"community-operators-jjcrx\" (UID: \"b1d745f2-3e63-47ef-b190-c837c9277122\") " pod="openshift-marketplace/community-operators-jjcrx" Feb 17 19:04:43 crc 
kubenswrapper[4680]: I0217 19:04:43.792887 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ss4q\" (UniqueName: \"kubernetes.io/projected/b1d745f2-3e63-47ef-b190-c837c9277122-kube-api-access-2ss4q\") pod \"community-operators-jjcrx\" (UID: \"b1d745f2-3e63-47ef-b190-c837c9277122\") " pod="openshift-marketplace/community-operators-jjcrx" Feb 17 19:04:43 crc kubenswrapper[4680]: I0217 19:04:43.938168 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jjcrx" Feb 17 19:04:44 crc kubenswrapper[4680]: I0217 19:04:44.460089 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jjcrx"] Feb 17 19:04:44 crc kubenswrapper[4680]: W0217 19:04:44.461200 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1d745f2_3e63_47ef_b190_c837c9277122.slice/crio-c5df0c4884ba76fcc0ad426e4a34375d5dd5a62e44478b146a645c672bb22a31 WatchSource:0}: Error finding container c5df0c4884ba76fcc0ad426e4a34375d5dd5a62e44478b146a645c672bb22a31: Status 404 returned error can't find the container with id c5df0c4884ba76fcc0ad426e4a34375d5dd5a62e44478b146a645c672bb22a31 Feb 17 19:04:45 crc kubenswrapper[4680]: I0217 19:04:45.045808 4680 generic.go:334] "Generic (PLEG): container finished" podID="b1d745f2-3e63-47ef-b190-c837c9277122" containerID="220596abf83ccc150aed0e7cd43e488d649bf8ef330ff42491cadab75d213466" exitCode=0 Feb 17 19:04:45 crc kubenswrapper[4680]: I0217 19:04:45.045864 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjcrx" event={"ID":"b1d745f2-3e63-47ef-b190-c837c9277122","Type":"ContainerDied","Data":"220596abf83ccc150aed0e7cd43e488d649bf8ef330ff42491cadab75d213466"} Feb 17 19:04:45 crc kubenswrapper[4680]: I0217 19:04:45.046039 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjcrx" event={"ID":"b1d745f2-3e63-47ef-b190-c837c9277122","Type":"ContainerStarted","Data":"c5df0c4884ba76fcc0ad426e4a34375d5dd5a62e44478b146a645c672bb22a31"} Feb 17 19:04:45 crc kubenswrapper[4680]: I0217 19:04:45.049430 4680 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 19:04:46 crc kubenswrapper[4680]: I0217 19:04:46.055491 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjcrx" event={"ID":"b1d745f2-3e63-47ef-b190-c837c9277122","Type":"ContainerStarted","Data":"fb24f661d386cf17ce5ff1cac5582a36f85f80598fc8cda28d1a7505327e8c0b"} Feb 17 19:04:46 crc kubenswrapper[4680]: I0217 19:04:46.581921 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dmvp9"] Feb 17 19:04:46 crc kubenswrapper[4680]: I0217 19:04:46.583974 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dmvp9" Feb 17 19:04:46 crc kubenswrapper[4680]: I0217 19:04:46.595073 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dmvp9"] Feb 17 19:04:46 crc kubenswrapper[4680]: I0217 19:04:46.733072 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e9a5c44-14e2-4cad-a733-b216381419f9-utilities\") pod \"redhat-operators-dmvp9\" (UID: \"2e9a5c44-14e2-4cad-a733-b216381419f9\") " pod="openshift-marketplace/redhat-operators-dmvp9" Feb 17 19:04:46 crc kubenswrapper[4680]: I0217 19:04:46.733135 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2hl2\" (UniqueName: \"kubernetes.io/projected/2e9a5c44-14e2-4cad-a733-b216381419f9-kube-api-access-v2hl2\") pod \"redhat-operators-dmvp9\" (UID: \"2e9a5c44-14e2-4cad-a733-b216381419f9\") " pod="openshift-marketplace/redhat-operators-dmvp9" Feb 17 19:04:46 crc kubenswrapper[4680]: I0217 19:04:46.733344 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e9a5c44-14e2-4cad-a733-b216381419f9-catalog-content\") pod \"redhat-operators-dmvp9\" (UID: \"2e9a5c44-14e2-4cad-a733-b216381419f9\") " pod="openshift-marketplace/redhat-operators-dmvp9" Feb 17 19:04:46 crc kubenswrapper[4680]: I0217 19:04:46.835590 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e9a5c44-14e2-4cad-a733-b216381419f9-catalog-content\") pod \"redhat-operators-dmvp9\" (UID: \"2e9a5c44-14e2-4cad-a733-b216381419f9\") " pod="openshift-marketplace/redhat-operators-dmvp9" Feb 17 19:04:46 crc kubenswrapper[4680]: I0217 19:04:46.835732 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e9a5c44-14e2-4cad-a733-b216381419f9-utilities\") pod \"redhat-operators-dmvp9\" (UID: \"2e9a5c44-14e2-4cad-a733-b216381419f9\") " pod="openshift-marketplace/redhat-operators-dmvp9" Feb 17 19:04:46 crc kubenswrapper[4680]: I0217 19:04:46.835757 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2hl2\" (UniqueName: \"kubernetes.io/projected/2e9a5c44-14e2-4cad-a733-b216381419f9-kube-api-access-v2hl2\") pod \"redhat-operators-dmvp9\" (UID: \"2e9a5c44-14e2-4cad-a733-b216381419f9\") " pod="openshift-marketplace/redhat-operators-dmvp9" Feb 17 19:04:46 crc kubenswrapper[4680]: I0217 19:04:46.836573 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e9a5c44-14e2-4cad-a733-b216381419f9-catalog-content\") pod \"redhat-operators-dmvp9\" (UID: \"2e9a5c44-14e2-4cad-a733-b216381419f9\") " pod="openshift-marketplace/redhat-operators-dmvp9" Feb 17 19:04:46 crc kubenswrapper[4680]: I0217 19:04:46.836789 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e9a5c44-14e2-4cad-a733-b216381419f9-utilities\") pod \"redhat-operators-dmvp9\" (UID: \"2e9a5c44-14e2-4cad-a733-b216381419f9\") " pod="openshift-marketplace/redhat-operators-dmvp9" Feb 17 19:04:46 crc kubenswrapper[4680]: I0217 19:04:46.856048 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-v2hl2\" (UniqueName: \"kubernetes.io/projected/2e9a5c44-14e2-4cad-a733-b216381419f9-kube-api-access-v2hl2\") pod \"redhat-operators-dmvp9\" (UID: \"2e9a5c44-14e2-4cad-a733-b216381419f9\") " pod="openshift-marketplace/redhat-operators-dmvp9" Feb 17 19:04:46 crc kubenswrapper[4680]: I0217 19:04:46.916209 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dmvp9" Feb 17 19:04:47 crc kubenswrapper[4680]: I0217 19:04:47.377367 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dmvp9"] Feb 17 19:04:48 crc kubenswrapper[4680]: I0217 19:04:48.073060 4680 generic.go:334] "Generic (PLEG): container finished" podID="b1d745f2-3e63-47ef-b190-c837c9277122" containerID="fb24f661d386cf17ce5ff1cac5582a36f85f80598fc8cda28d1a7505327e8c0b" exitCode=0 Feb 17 19:04:48 crc kubenswrapper[4680]: I0217 19:04:48.073146 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjcrx" event={"ID":"b1d745f2-3e63-47ef-b190-c837c9277122","Type":"ContainerDied","Data":"fb24f661d386cf17ce5ff1cac5582a36f85f80598fc8cda28d1a7505327e8c0b"} Feb 17 19:04:48 crc kubenswrapper[4680]: I0217 19:04:48.076130 4680 generic.go:334] "Generic (PLEG): container finished" podID="2e9a5c44-14e2-4cad-a733-b216381419f9" containerID="68bec28ca6098419057d06d987b2929036c28a751a5f5400599b80c2b7a3d482" exitCode=0 Feb 17 19:04:48 crc kubenswrapper[4680]: I0217 19:04:48.076198 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dmvp9" event={"ID":"2e9a5c44-14e2-4cad-a733-b216381419f9","Type":"ContainerDied","Data":"68bec28ca6098419057d06d987b2929036c28a751a5f5400599b80c2b7a3d482"} Feb 17 19:04:48 crc kubenswrapper[4680]: I0217 19:04:48.076229 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dmvp9" event={"ID":"2e9a5c44-14e2-4cad-a733-b216381419f9","Type":"ContainerStarted","Data":"e433274966621c2cd871def303d38fc5210ec67f766f9e022d4de656f2f48357"} Feb 17 19:04:49 crc kubenswrapper[4680]: I0217 19:04:49.086657 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjcrx" event={"ID":"b1d745f2-3e63-47ef-b190-c837c9277122","Type":"ContainerStarted","Data":"0440fb673c68bdbf3710e5727dc769b7bba6264ed83a086e402612c054ddee7d"} Feb 17 19:04:49 crc kubenswrapper[4680]: I0217 19:04:49.110518 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jjcrx" podStartSLOduration=2.674612181 podStartE2EDuration="6.110502097s" podCreationTimestamp="2026-02-17 19:04:43 +0000 UTC" firstStartedPulling="2026-02-17 19:04:45.049205594 +0000 UTC m=+4814.600104273" lastFinishedPulling="2026-02-17 19:04:48.4850955 +0000 UTC m=+4818.035994189" observedRunningTime="2026-02-17 19:04:49.106772261 +0000 UTC m=+4818.657670940" watchObservedRunningTime="2026-02-17 19:04:49.110502097 +0000 UTC m=+4818.661400776" Feb 17 19:04:50 crc kubenswrapper[4680]: I0217 19:04:50.094859 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dmvp9" event={"ID":"2e9a5c44-14e2-4cad-a733-b216381419f9","Type":"ContainerStarted","Data":"4fc97a5800f9420e989aacf258ce5e832eb6d94220bdb2bdf6ddb3c7ef04ebde"} Feb 17 19:04:53 crc kubenswrapper[4680]: I0217 19:04:53.939268 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jjcrx" 
Feb 17 19:04:53 crc kubenswrapper[4680]: I0217 19:04:53.941181 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jjcrx" Feb 17 19:04:53 crc kubenswrapper[4680]: I0217 19:04:53.994410 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jjcrx" Feb 17 19:04:54 crc kubenswrapper[4680]: I0217 19:04:54.140670 4680 generic.go:334] "Generic (PLEG): container finished" podID="2e9a5c44-14e2-4cad-a733-b216381419f9" containerID="4fc97a5800f9420e989aacf258ce5e832eb6d94220bdb2bdf6ddb3c7ef04ebde" exitCode=0 Feb 17 19:04:54 crc kubenswrapper[4680]: I0217 19:04:54.140744 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dmvp9" event={"ID":"2e9a5c44-14e2-4cad-a733-b216381419f9","Type":"ContainerDied","Data":"4fc97a5800f9420e989aacf258ce5e832eb6d94220bdb2bdf6ddb3c7ef04ebde"} Feb 17 19:04:54 crc kubenswrapper[4680]: I0217 19:04:54.206482 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jjcrx" Feb 17 19:04:55 crc kubenswrapper[4680]: I0217 19:04:55.151700 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dmvp9" event={"ID":"2e9a5c44-14e2-4cad-a733-b216381419f9","Type":"ContainerStarted","Data":"62620d1c306f3a9559bc8228a4e8ec66582a8b41b60fd217cb9fb3271182c0c8"} Feb 17 19:04:55 crc kubenswrapper[4680]: I0217 19:04:55.185063 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dmvp9" podStartSLOduration=2.744704697 podStartE2EDuration="9.18497737s" podCreationTimestamp="2026-02-17 19:04:46 +0000 UTC" firstStartedPulling="2026-02-17 19:04:48.087820164 +0000 UTC m=+4817.638718843" lastFinishedPulling="2026-02-17 19:04:54.528092837 +0000 UTC m=+4824.078991516" observedRunningTime="2026-02-17 19:04:55.171067925 +0000 UTC m=+4824.721966594" watchObservedRunningTime="2026-02-17 19:04:55.18497737 +0000 UTC m=+4824.735876059" Feb 17 19:04:56 crc kubenswrapper[4680]: I0217 19:04:56.573577 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jjcrx"] Feb 17 19:04:56 crc kubenswrapper[4680]: I0217 19:04:56.574171 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jjcrx" podUID="b1d745f2-3e63-47ef-b190-c837c9277122" containerName="registry-server" containerID="cri-o://0440fb673c68bdbf3710e5727dc769b7bba6264ed83a086e402612c054ddee7d" gracePeriod=2 Feb 17 19:04:56 crc kubenswrapper[4680]: I0217 19:04:56.916971 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dmvp9" Feb 17 19:04:56 crc kubenswrapper[4680]: I0217 19:04:56.918044 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dmvp9" Feb 17 19:04:57 crc kubenswrapper[4680]: I0217 19:04:57.214372 4680 generic.go:334] "Generic (PLEG): container finished" podID="b1d745f2-3e63-47ef-b190-c837c9277122" containerID="0440fb673c68bdbf3710e5727dc769b7bba6264ed83a086e402612c054ddee7d" exitCode=0 Feb 17 19:04:57 crc kubenswrapper[4680]: I0217 19:04:57.215235 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjcrx" 
event={"ID":"b1d745f2-3e63-47ef-b190-c837c9277122","Type":"ContainerDied","Data":"0440fb673c68bdbf3710e5727dc769b7bba6264ed83a086e402612c054ddee7d"} Feb 17 19:04:57 crc kubenswrapper[4680]: I0217 19:04:57.894941 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jjcrx" Feb 17 19:04:57 crc kubenswrapper[4680]: I0217 19:04:57.948261 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1d745f2-3e63-47ef-b190-c837c9277122-catalog-content\") pod \"b1d745f2-3e63-47ef-b190-c837c9277122\" (UID: \"b1d745f2-3e63-47ef-b190-c837c9277122\") " Feb 17 19:04:57 crc kubenswrapper[4680]: I0217 19:04:57.948518 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ss4q\" (UniqueName: \"kubernetes.io/projected/b1d745f2-3e63-47ef-b190-c837c9277122-kube-api-access-2ss4q\") pod \"b1d745f2-3e63-47ef-b190-c837c9277122\" (UID: \"b1d745f2-3e63-47ef-b190-c837c9277122\") " Feb 17 19:04:57 crc kubenswrapper[4680]: I0217 19:04:57.948631 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1d745f2-3e63-47ef-b190-c837c9277122-utilities\") pod \"b1d745f2-3e63-47ef-b190-c837c9277122\" (UID: \"b1d745f2-3e63-47ef-b190-c837c9277122\") " Feb 17 19:04:57 crc kubenswrapper[4680]: I0217 19:04:57.949225 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1d745f2-3e63-47ef-b190-c837c9277122-utilities" (OuterVolumeSpecName: "utilities") pod "b1d745f2-3e63-47ef-b190-c837c9277122" (UID: "b1d745f2-3e63-47ef-b190-c837c9277122"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 19:04:57 crc kubenswrapper[4680]: I0217 19:04:57.954080 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1d745f2-3e63-47ef-b190-c837c9277122-kube-api-access-2ss4q" (OuterVolumeSpecName: "kube-api-access-2ss4q") pod "b1d745f2-3e63-47ef-b190-c837c9277122" (UID: "b1d745f2-3e63-47ef-b190-c837c9277122"). InnerVolumeSpecName "kube-api-access-2ss4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 19:04:57 crc kubenswrapper[4680]: I0217 19:04:57.998722 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1d745f2-3e63-47ef-b190-c837c9277122-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1d745f2-3e63-47ef-b190-c837c9277122" (UID: "b1d745f2-3e63-47ef-b190-c837c9277122"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 19:04:58 crc kubenswrapper[4680]: I0217 19:04:58.050684 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ss4q\" (UniqueName: \"kubernetes.io/projected/b1d745f2-3e63-47ef-b190-c837c9277122-kube-api-access-2ss4q\") on node \"crc\" DevicePath \"\"" Feb 17 19:04:58 crc kubenswrapper[4680]: I0217 19:04:58.050713 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1d745f2-3e63-47ef-b190-c837c9277122-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 19:04:58 crc kubenswrapper[4680]: I0217 19:04:58.050757 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1d745f2-3e63-47ef-b190-c837c9277122-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 19:04:58 crc kubenswrapper[4680]: I0217 19:04:58.222913 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dmvp9" podUID="2e9a5c44-14e2-4cad-a733-b216381419f9" containerName="registry-server" probeResult="failure" output=< Feb 17 19:04:58 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Feb 17 19:04:58 crc kubenswrapper[4680]: > Feb 17 19:04:58 crc kubenswrapper[4680]: I0217 19:04:58.224162 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjcrx" event={"ID":"b1d745f2-3e63-47ef-b190-c837c9277122","Type":"ContainerDied","Data":"c5df0c4884ba76fcc0ad426e4a34375d5dd5a62e44478b146a645c672bb22a31"} Feb 17 19:04:58 crc kubenswrapper[4680]: I0217 19:04:58.224222 4680 scope.go:117] "RemoveContainer" containerID="0440fb673c68bdbf3710e5727dc769b7bba6264ed83a086e402612c054ddee7d" Feb 17 19:04:58 crc kubenswrapper[4680]: I0217 19:04:58.224224 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jjcrx" Feb 17 19:04:58 crc kubenswrapper[4680]: I0217 19:04:58.252792 4680 scope.go:117] "RemoveContainer" containerID="fb24f661d386cf17ce5ff1cac5582a36f85f80598fc8cda28d1a7505327e8c0b" Feb 17 19:04:58 crc kubenswrapper[4680]: I0217 19:04:58.257134 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jjcrx"] Feb 17 19:04:58 crc kubenswrapper[4680]: I0217 19:04:58.264824 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jjcrx"] Feb 17 19:04:58 crc kubenswrapper[4680]: I0217 19:04:58.274943 4680 scope.go:117] "RemoveContainer" containerID="220596abf83ccc150aed0e7cd43e488d649bf8ef330ff42491cadab75d213466" Feb 17 19:04:59 crc kubenswrapper[4680]: I0217 19:04:59.115217 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1d745f2-3e63-47ef-b190-c837c9277122" path="/var/lib/kubelet/pods/b1d745f2-3e63-47ef-b190-c837c9277122/volumes" Feb 17 19:05:08 crc kubenswrapper[4680]: I0217 19:05:08.349215 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dmvp9" podUID="2e9a5c44-14e2-4cad-a733-b216381419f9" containerName="registry-server" probeResult="failure" output=< Feb 17 19:05:08 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Feb 17 19:05:08 crc kubenswrapper[4680]: > Feb 17 19:05:16 crc kubenswrapper[4680]: I0217 19:05:16.971345 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dmvp9" Feb 17 19:05:17 crc kubenswrapper[4680]: I0217 19:05:17.024290 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dmvp9" Feb 17 19:05:17 crc kubenswrapper[4680]: I0217 19:05:17.781362 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dmvp9"] Feb 17 19:05:18 crc kubenswrapper[4680]: I0217 19:05:18.407925 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dmvp9" podUID="2e9a5c44-14e2-4cad-a733-b216381419f9" containerName="registry-server" containerID="cri-o://62620d1c306f3a9559bc8228a4e8ec66582a8b41b60fd217cb9fb3271182c0c8" gracePeriod=2 Feb 17 19:05:18 crc kubenswrapper[4680]: I0217 19:05:18.882398 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dmvp9" Feb 17 19:05:19 crc kubenswrapper[4680]: I0217 19:05:19.067534 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e9a5c44-14e2-4cad-a733-b216381419f9-catalog-content\") pod \"2e9a5c44-14e2-4cad-a733-b216381419f9\" (UID: \"2e9a5c44-14e2-4cad-a733-b216381419f9\") " Feb 17 19:05:19 crc kubenswrapper[4680]: I0217 19:05:19.067634 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2hl2\" (UniqueName: \"kubernetes.io/projected/2e9a5c44-14e2-4cad-a733-b216381419f9-kube-api-access-v2hl2\") pod \"2e9a5c44-14e2-4cad-a733-b216381419f9\" (UID: \"2e9a5c44-14e2-4cad-a733-b216381419f9\") " Feb 17 19:05:19 crc kubenswrapper[4680]: I0217 19:05:19.067818 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e9a5c44-14e2-4cad-a733-b216381419f9-utilities\") pod \"2e9a5c44-14e2-4cad-a733-b216381419f9\" (UID: \"2e9a5c44-14e2-4cad-a733-b216381419f9\") " Feb 17 19:05:19 crc kubenswrapper[4680]: I0217 19:05:19.068694 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e9a5c44-14e2-4cad-a733-b216381419f9-utilities" (OuterVolumeSpecName: "utilities") pod "2e9a5c44-14e2-4cad-a733-b216381419f9" (UID: "2e9a5c44-14e2-4cad-a733-b216381419f9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 19:05:19 crc kubenswrapper[4680]: I0217 19:05:19.076070 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e9a5c44-14e2-4cad-a733-b216381419f9-kube-api-access-v2hl2" (OuterVolumeSpecName: "kube-api-access-v2hl2") pod "2e9a5c44-14e2-4cad-a733-b216381419f9" (UID: "2e9a5c44-14e2-4cad-a733-b216381419f9"). InnerVolumeSpecName "kube-api-access-v2hl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 19:05:19 crc kubenswrapper[4680]: I0217 19:05:19.170743 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2hl2\" (UniqueName: \"kubernetes.io/projected/2e9a5c44-14e2-4cad-a733-b216381419f9-kube-api-access-v2hl2\") on node \"crc\" DevicePath \"\"" Feb 17 19:05:19 crc kubenswrapper[4680]: I0217 19:05:19.170839 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e9a5c44-14e2-4cad-a733-b216381419f9-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 19:05:19 crc kubenswrapper[4680]: I0217 19:05:19.203827 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e9a5c44-14e2-4cad-a733-b216381419f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e9a5c44-14e2-4cad-a733-b216381419f9" (UID: "2e9a5c44-14e2-4cad-a733-b216381419f9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 19:05:19 crc kubenswrapper[4680]: I0217 19:05:19.273601 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e9a5c44-14e2-4cad-a733-b216381419f9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 19:05:19 crc kubenswrapper[4680]: I0217 19:05:19.422518 4680 generic.go:334] "Generic (PLEG): container finished" podID="2e9a5c44-14e2-4cad-a733-b216381419f9" containerID="62620d1c306f3a9559bc8228a4e8ec66582a8b41b60fd217cb9fb3271182c0c8" exitCode=0 Feb 17 19:05:19 crc kubenswrapper[4680]: I0217 19:05:19.422767 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dmvp9" event={"ID":"2e9a5c44-14e2-4cad-a733-b216381419f9","Type":"ContainerDied","Data":"62620d1c306f3a9559bc8228a4e8ec66582a8b41b60fd217cb9fb3271182c0c8"} Feb 17 19:05:19 crc kubenswrapper[4680]: I0217 19:05:19.424489 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dmvp9" event={"ID":"2e9a5c44-14e2-4cad-a733-b216381419f9","Type":"ContainerDied","Data":"e433274966621c2cd871def303d38fc5210ec67f766f9e022d4de656f2f48357"} Feb 17 19:05:19 crc kubenswrapper[4680]: I0217 19:05:19.422920 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dmvp9" Feb 17 19:05:19 crc kubenswrapper[4680]: I0217 19:05:19.424565 4680 scope.go:117] "RemoveContainer" containerID="62620d1c306f3a9559bc8228a4e8ec66582a8b41b60fd217cb9fb3271182c0c8" Feb 17 19:05:19 crc kubenswrapper[4680]: I0217 19:05:19.466914 4680 scope.go:117] "RemoveContainer" containerID="4fc97a5800f9420e989aacf258ce5e832eb6d94220bdb2bdf6ddb3c7ef04ebde" Feb 17 19:05:19 crc kubenswrapper[4680]: I0217 19:05:19.470249 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dmvp9"] Feb 17 19:05:19 crc kubenswrapper[4680]: I0217 19:05:19.478834 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dmvp9"] Feb 17 19:05:19 crc kubenswrapper[4680]: I0217 19:05:19.491336 4680 scope.go:117] "RemoveContainer" containerID="68bec28ca6098419057d06d987b2929036c28a751a5f5400599b80c2b7a3d482" Feb 17 19:05:19 crc kubenswrapper[4680]: I0217 19:05:19.550996 4680 scope.go:117] "RemoveContainer" containerID="62620d1c306f3a9559bc8228a4e8ec66582a8b41b60fd217cb9fb3271182c0c8" Feb 17 19:05:19 crc kubenswrapper[4680]: E0217 19:05:19.551421 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62620d1c306f3a9559bc8228a4e8ec66582a8b41b60fd217cb9fb3271182c0c8\": container with ID starting with 62620d1c306f3a9559bc8228a4e8ec66582a8b41b60fd217cb9fb3271182c0c8 not found: ID does not exist" containerID="62620d1c306f3a9559bc8228a4e8ec66582a8b41b60fd217cb9fb3271182c0c8" Feb 17 19:05:19 crc kubenswrapper[4680]: I0217 19:05:19.551457 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62620d1c306f3a9559bc8228a4e8ec66582a8b41b60fd217cb9fb3271182c0c8"} err="failed to get container status \"62620d1c306f3a9559bc8228a4e8ec66582a8b41b60fd217cb9fb3271182c0c8\": rpc error: code = NotFound desc = could not find container \"62620d1c306f3a9559bc8228a4e8ec66582a8b41b60fd217cb9fb3271182c0c8\": container with ID starting with 62620d1c306f3a9559bc8228a4e8ec66582a8b41b60fd217cb9fb3271182c0c8 not found: ID does not exist" Feb 17 19:05:19 crc 
kubenswrapper[4680]: I0217 19:05:19.551484 4680 scope.go:117] "RemoveContainer" containerID="4fc97a5800f9420e989aacf258ce5e832eb6d94220bdb2bdf6ddb3c7ef04ebde" Feb 17 19:05:19 crc kubenswrapper[4680]: E0217 19:05:19.551893 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fc97a5800f9420e989aacf258ce5e832eb6d94220bdb2bdf6ddb3c7ef04ebde\": container with ID starting with 4fc97a5800f9420e989aacf258ce5e832eb6d94220bdb2bdf6ddb3c7ef04ebde not found: ID does not exist" containerID="4fc97a5800f9420e989aacf258ce5e832eb6d94220bdb2bdf6ddb3c7ef04ebde" Feb 17 19:05:19 crc kubenswrapper[4680]: I0217 19:05:19.551920 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fc97a5800f9420e989aacf258ce5e832eb6d94220bdb2bdf6ddb3c7ef04ebde"} err="failed to get container status \"4fc97a5800f9420e989aacf258ce5e832eb6d94220bdb2bdf6ddb3c7ef04ebde\": rpc error: code = NotFound desc = could not find container \"4fc97a5800f9420e989aacf258ce5e832eb6d94220bdb2bdf6ddb3c7ef04ebde\": container with ID starting with 4fc97a5800f9420e989aacf258ce5e832eb6d94220bdb2bdf6ddb3c7ef04ebde not found: ID does not exist" Feb 17 19:05:19 crc kubenswrapper[4680]: I0217 19:05:19.551934 4680 scope.go:117] "RemoveContainer" containerID="68bec28ca6098419057d06d987b2929036c28a751a5f5400599b80c2b7a3d482" Feb 17 19:05:19 crc kubenswrapper[4680]: E0217 19:05:19.552193 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68bec28ca6098419057d06d987b2929036c28a751a5f5400599b80c2b7a3d482\": container with ID starting with 68bec28ca6098419057d06d987b2929036c28a751a5f5400599b80c2b7a3d482 not found: ID does not exist" containerID="68bec28ca6098419057d06d987b2929036c28a751a5f5400599b80c2b7a3d482" Feb 17 19:05:19 crc kubenswrapper[4680]: I0217 19:05:19.552216 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68bec28ca6098419057d06d987b2929036c28a751a5f5400599b80c2b7a3d482"} err="failed to get container status \"68bec28ca6098419057d06d987b2929036c28a751a5f5400599b80c2b7a3d482\": rpc error: code = NotFound desc = could not find container \"68bec28ca6098419057d06d987b2929036c28a751a5f5400599b80c2b7a3d482\": container with ID starting with 68bec28ca6098419057d06d987b2929036c28a751a5f5400599b80c2b7a3d482 not found: ID does not exist" Feb 17 19:05:21 crc kubenswrapper[4680]: I0217 19:05:21.115376 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e9a5c44-14e2-4cad-a733-b216381419f9" path="/var/lib/kubelet/pods/2e9a5c44-14e2-4cad-a733-b216381419f9/volumes" Feb 17 19:05:36 crc kubenswrapper[4680]: I0217 19:05:36.903601 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rrvkc/must-gather-7gjz6"] Feb 17 19:05:36 crc kubenswrapper[4680]: E0217 19:05:36.904430 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1d745f2-3e63-47ef-b190-c837c9277122" containerName="extract-utilities" Feb 17 19:05:36 crc kubenswrapper[4680]: I0217 19:05:36.904442 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1d745f2-3e63-47ef-b190-c837c9277122" containerName="extract-utilities" Feb 17 19:05:36 crc kubenswrapper[4680]: E0217 19:05:36.904455 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e9a5c44-14e2-4cad-a733-b216381419f9" containerName="extract-utilities" Feb 17 19:05:36 crc kubenswrapper[4680]: I0217 19:05:36.904461 4680 
state_mem.go:107] "Deleted CPUSet assignment" podUID="2e9a5c44-14e2-4cad-a733-b216381419f9" containerName="extract-utilities" Feb 17 19:05:36 crc kubenswrapper[4680]: E0217 19:05:36.904468 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e9a5c44-14e2-4cad-a733-b216381419f9" containerName="extract-content" Feb 17 19:05:36 crc kubenswrapper[4680]: I0217 19:05:36.904477 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e9a5c44-14e2-4cad-a733-b216381419f9" containerName="extract-content" Feb 17 19:05:36 crc kubenswrapper[4680]: E0217 19:05:36.904499 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e9a5c44-14e2-4cad-a733-b216381419f9" containerName="registry-server" Feb 17 19:05:36 crc kubenswrapper[4680]: I0217 19:05:36.904505 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e9a5c44-14e2-4cad-a733-b216381419f9" containerName="registry-server" Feb 17 19:05:36 crc kubenswrapper[4680]: E0217 19:05:36.904525 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1d745f2-3e63-47ef-b190-c837c9277122" containerName="extract-content" Feb 17 19:05:36 crc kubenswrapper[4680]: I0217 19:05:36.904531 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1d745f2-3e63-47ef-b190-c837c9277122" containerName="extract-content" Feb 17 19:05:36 crc kubenswrapper[4680]: E0217 19:05:36.904538 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1d745f2-3e63-47ef-b190-c837c9277122" containerName="registry-server" Feb 17 19:05:36 crc kubenswrapper[4680]: I0217 19:05:36.904543 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1d745f2-3e63-47ef-b190-c837c9277122" containerName="registry-server" Feb 17 19:05:36 crc kubenswrapper[4680]: I0217 19:05:36.904705 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e9a5c44-14e2-4cad-a733-b216381419f9" containerName="registry-server" Feb 17 19:05:36 crc kubenswrapper[4680]: I0217 19:05:36.904722 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1d745f2-3e63-47ef-b190-c837c9277122" containerName="registry-server" Feb 17 19:05:36 crc kubenswrapper[4680]: I0217 19:05:36.905673 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rrvkc/must-gather-7gjz6" Feb 17 19:05:36 crc kubenswrapper[4680]: I0217 19:05:36.913014 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rrvkc"/"kube-root-ca.crt" Feb 17 19:05:36 crc kubenswrapper[4680]: I0217 19:05:36.913065 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rrvkc"/"openshift-service-ca.crt" Feb 17 19:05:36 crc kubenswrapper[4680]: I0217 19:05:36.913475 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-rrvkc"/"default-dockercfg-f6nfm" Feb 17 19:05:36 crc kubenswrapper[4680]: I0217 19:05:36.942382 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rrvkc/must-gather-7gjz6"] Feb 17 19:05:37 crc kubenswrapper[4680]: I0217 19:05:37.031067 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3ef5c723-d0a3-4d86-90fe-e8a90ebda679-must-gather-output\") pod \"must-gather-7gjz6\" (UID: \"3ef5c723-d0a3-4d86-90fe-e8a90ebda679\") " pod="openshift-must-gather-rrvkc/must-gather-7gjz6" Feb 17 19:05:37 crc kubenswrapper[4680]: I0217 19:05:37.031178 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpznm\" (UniqueName: \"kubernetes.io/projected/3ef5c723-d0a3-4d86-90fe-e8a90ebda679-kube-api-access-qpznm\") pod \"must-gather-7gjz6\" (UID: \"3ef5c723-d0a3-4d86-90fe-e8a90ebda679\") " pod="openshift-must-gather-rrvkc/must-gather-7gjz6" Feb 17 19:05:37 crc kubenswrapper[4680]: I0217 19:05:37.133478 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3ef5c723-d0a3-4d86-90fe-e8a90ebda679-must-gather-output\") pod \"must-gather-7gjz6\" (UID: \"3ef5c723-d0a3-4d86-90fe-e8a90ebda679\") " pod="openshift-must-gather-rrvkc/must-gather-7gjz6" Feb 17 19:05:37 crc kubenswrapper[4680]: I0217 19:05:37.133773 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpznm\" (UniqueName: \"kubernetes.io/projected/3ef5c723-d0a3-4d86-90fe-e8a90ebda679-kube-api-access-qpznm\") pod \"must-gather-7gjz6\" (UID: \"3ef5c723-d0a3-4d86-90fe-e8a90ebda679\") " pod="openshift-must-gather-rrvkc/must-gather-7gjz6" Feb 17 19:05:37 crc kubenswrapper[4680]: I0217 19:05:37.133970 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3ef5c723-d0a3-4d86-90fe-e8a90ebda679-must-gather-output\") pod \"must-gather-7gjz6\" (UID: \"3ef5c723-d0a3-4d86-90fe-e8a90ebda679\") " pod="openshift-must-gather-rrvkc/must-gather-7gjz6" Feb 17 19:05:37 crc kubenswrapper[4680]: I0217 19:05:37.156349 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpznm\" (UniqueName: \"kubernetes.io/projected/3ef5c723-d0a3-4d86-90fe-e8a90ebda679-kube-api-access-qpznm\") pod \"must-gather-7gjz6\" (UID: \"3ef5c723-d0a3-4d86-90fe-e8a90ebda679\") " pod="openshift-must-gather-rrvkc/must-gather-7gjz6" Feb 17 19:05:37 crc kubenswrapper[4680]: I0217 19:05:37.230749 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rrvkc/must-gather-7gjz6" Feb 17 19:05:37 crc kubenswrapper[4680]: I0217 19:05:37.737099 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rrvkc/must-gather-7gjz6"] Feb 17 19:05:38 crc kubenswrapper[4680]: I0217 19:05:38.586028 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rrvkc/must-gather-7gjz6" event={"ID":"3ef5c723-d0a3-4d86-90fe-e8a90ebda679","Type":"ContainerStarted","Data":"55236760eb39d6f6be0f1c6da88328900432df36041cfb039736679995becd75"} Feb 17 19:05:38 crc kubenswrapper[4680]: I0217 19:05:38.586415 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rrvkc/must-gather-7gjz6" event={"ID":"3ef5c723-d0a3-4d86-90fe-e8a90ebda679","Type":"ContainerStarted","Data":"67dc94b206fbddb6ac44785f2e8eaf1bae7ca776f2af0e90c03bcd72d390b4f7"} Feb 17 19:05:38 crc kubenswrapper[4680]: I0217 19:05:38.586426 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rrvkc/must-gather-7gjz6" event={"ID":"3ef5c723-d0a3-4d86-90fe-e8a90ebda679","Type":"ContainerStarted","Data":"fb2a72e5872ff036e9a1521c6152990c742c65db60e0360f6412b531d22a9457"} Feb 17 19:05:38 crc kubenswrapper[4680]: I0217 19:05:38.609861 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rrvkc/must-gather-7gjz6" podStartSLOduration=2.609844737 podStartE2EDuration="2.609844737s" podCreationTimestamp="2026-02-17 19:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 19:05:38.603760197 +0000 UTC m=+4868.154658876" watchObservedRunningTime="2026-02-17 19:05:38.609844737 +0000 UTC m=+4868.160743416" Feb 17 19:05:42 crc kubenswrapper[4680]: I0217 19:05:42.673869 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rrvkc/crc-debug-9dplw"] Feb 17 19:05:42 crc kubenswrapper[4680]: I0217 19:05:42.675374 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rrvkc/crc-debug-9dplw" Feb 17 19:05:42 crc kubenswrapper[4680]: I0217 19:05:42.736126 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfrt7\" (UniqueName: \"kubernetes.io/projected/df6d343e-727c-48f7-bf32-d4b59688fa84-kube-api-access-rfrt7\") pod \"crc-debug-9dplw\" (UID: \"df6d343e-727c-48f7-bf32-d4b59688fa84\") " pod="openshift-must-gather-rrvkc/crc-debug-9dplw" Feb 17 19:05:42 crc kubenswrapper[4680]: I0217 19:05:42.736168 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/df6d343e-727c-48f7-bf32-d4b59688fa84-host\") pod \"crc-debug-9dplw\" (UID: \"df6d343e-727c-48f7-bf32-d4b59688fa84\") " pod="openshift-must-gather-rrvkc/crc-debug-9dplw" Feb 17 19:05:42 crc kubenswrapper[4680]: I0217 19:05:42.837617 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfrt7\" (UniqueName: \"kubernetes.io/projected/df6d343e-727c-48f7-bf32-d4b59688fa84-kube-api-access-rfrt7\") pod \"crc-debug-9dplw\" (UID: \"df6d343e-727c-48f7-bf32-d4b59688fa84\") " pod="openshift-must-gather-rrvkc/crc-debug-9dplw" Feb 17 19:05:42 crc kubenswrapper[4680]: I0217 19:05:42.837950 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/df6d343e-727c-48f7-bf32-d4b59688fa84-host\") pod \"crc-debug-9dplw\" (UID: \"df6d343e-727c-48f7-bf32-d4b59688fa84\") " pod="openshift-must-gather-rrvkc/crc-debug-9dplw" Feb 17 19:05:42 crc kubenswrapper[4680]: I0217 19:05:42.838237 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/df6d343e-727c-48f7-bf32-d4b59688fa84-host\") pod \"crc-debug-9dplw\" (UID: \"df6d343e-727c-48f7-bf32-d4b59688fa84\") " pod="openshift-must-gather-rrvkc/crc-debug-9dplw" Feb 17 19:05:42 crc kubenswrapper[4680]: I0217 19:05:42.873329 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfrt7\" (UniqueName: \"kubernetes.io/projected/df6d343e-727c-48f7-bf32-d4b59688fa84-kube-api-access-rfrt7\") pod \"crc-debug-9dplw\" (UID: \"df6d343e-727c-48f7-bf32-d4b59688fa84\") " pod="openshift-must-gather-rrvkc/crc-debug-9dplw" Feb 17 19:05:43 crc kubenswrapper[4680]: I0217 19:05:43.003899 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rrvkc/crc-debug-9dplw" Feb 17 19:05:43 crc kubenswrapper[4680]: I0217 19:05:43.624621 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rrvkc/crc-debug-9dplw" event={"ID":"df6d343e-727c-48f7-bf32-d4b59688fa84","Type":"ContainerStarted","Data":"830a75380aaa214d80c3e65b22bd3950b5bb4d2241c54bcf842e5ba03b345a24"} Feb 17 19:05:43 crc kubenswrapper[4680]: I0217 19:05:43.625150 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rrvkc/crc-debug-9dplw" event={"ID":"df6d343e-727c-48f7-bf32-d4b59688fa84","Type":"ContainerStarted","Data":"f802cd6360dcbec9112d488ff00c39d950328107a7f63c0e694a4e5dad19da20"} Feb 17 19:05:43 crc kubenswrapper[4680]: I0217 19:05:43.644378 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rrvkc/crc-debug-9dplw" podStartSLOduration=1.644364023 podStartE2EDuration="1.644364023s" podCreationTimestamp="2026-02-17 19:05:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 19:05:43.644005922 +0000 UTC m=+4873.194904601" watchObservedRunningTime="2026-02-17 19:05:43.644364023 +0000 UTC m=+4873.195262702" Feb 17 19:06:26 crc kubenswrapper[4680]: I0217 19:06:26.952042 4680 generic.go:334] "Generic (PLEG): container finished" podID="df6d343e-727c-48f7-bf32-d4b59688fa84" containerID="830a75380aaa214d80c3e65b22bd3950b5bb4d2241c54bcf842e5ba03b345a24" exitCode=0 Feb 17 19:06:26 crc kubenswrapper[4680]: I0217 19:06:26.952121 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rrvkc/crc-debug-9dplw" event={"ID":"df6d343e-727c-48f7-bf32-d4b59688fa84","Type":"ContainerDied","Data":"830a75380aaa214d80c3e65b22bd3950b5bb4d2241c54bcf842e5ba03b345a24"} Feb 17 19:06:28 crc kubenswrapper[4680]: I0217 19:06:28.064663 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rrvkc/crc-debug-9dplw" Feb 17 19:06:28 crc kubenswrapper[4680]: I0217 19:06:28.101556 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rrvkc/crc-debug-9dplw"] Feb 17 19:06:28 crc kubenswrapper[4680]: I0217 19:06:28.107877 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfrt7\" (UniqueName: \"kubernetes.io/projected/df6d343e-727c-48f7-bf32-d4b59688fa84-kube-api-access-rfrt7\") pod \"df6d343e-727c-48f7-bf32-d4b59688fa84\" (UID: \"df6d343e-727c-48f7-bf32-d4b59688fa84\") " Feb 17 19:06:28 crc kubenswrapper[4680]: I0217 19:06:28.107956 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/df6d343e-727c-48f7-bf32-d4b59688fa84-host\") pod \"df6d343e-727c-48f7-bf32-d4b59688fa84\" (UID: \"df6d343e-727c-48f7-bf32-d4b59688fa84\") " Feb 17 19:06:28 crc kubenswrapper[4680]: I0217 19:06:28.108568 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df6d343e-727c-48f7-bf32-d4b59688fa84-host" (OuterVolumeSpecName: "host") pod "df6d343e-727c-48f7-bf32-d4b59688fa84" (UID: "df6d343e-727c-48f7-bf32-d4b59688fa84"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 19:06:28 crc kubenswrapper[4680]: I0217 19:06:28.108884 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rrvkc/crc-debug-9dplw"] Feb 17 19:06:28 crc kubenswrapper[4680]: I0217 19:06:28.114919 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df6d343e-727c-48f7-bf32-d4b59688fa84-kube-api-access-rfrt7" (OuterVolumeSpecName: "kube-api-access-rfrt7") pod "df6d343e-727c-48f7-bf32-d4b59688fa84" (UID: "df6d343e-727c-48f7-bf32-d4b59688fa84"). InnerVolumeSpecName "kube-api-access-rfrt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 19:06:28 crc kubenswrapper[4680]: I0217 19:06:28.210534 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfrt7\" (UniqueName: \"kubernetes.io/projected/df6d343e-727c-48f7-bf32-d4b59688fa84-kube-api-access-rfrt7\") on node \"crc\" DevicePath \"\"" Feb 17 19:06:28 crc kubenswrapper[4680]: I0217 19:06:28.210821 4680 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/df6d343e-727c-48f7-bf32-d4b59688fa84-host\") on node \"crc\" DevicePath \"\"" Feb 17 19:06:28 crc kubenswrapper[4680]: I0217 19:06:28.974708 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f802cd6360dcbec9112d488ff00c39d950328107a7f63c0e694a4e5dad19da20" Feb 17 19:06:28 crc kubenswrapper[4680]: I0217 19:06:28.974783 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rrvkc/crc-debug-9dplw" Feb 17 19:06:29 crc kubenswrapper[4680]: I0217 19:06:29.115542 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df6d343e-727c-48f7-bf32-d4b59688fa84" path="/var/lib/kubelet/pods/df6d343e-727c-48f7-bf32-d4b59688fa84/volumes" Feb 17 19:06:29 crc kubenswrapper[4680]: I0217 19:06:29.290994 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rrvkc/crc-debug-tgwb6"] Feb 17 19:06:29 crc kubenswrapper[4680]: E0217 19:06:29.291822 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df6d343e-727c-48f7-bf32-d4b59688fa84" containerName="container-00" Feb 17 19:06:29 crc kubenswrapper[4680]: I0217 19:06:29.291891 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="df6d343e-727c-48f7-bf32-d4b59688fa84" containerName="container-00" Feb 17 19:06:29 crc kubenswrapper[4680]: I0217 19:06:29.292130 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="df6d343e-727c-48f7-bf32-d4b59688fa84" containerName="container-00" Feb 17 19:06:29 crc kubenswrapper[4680]: I0217 19:06:29.292816 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rrvkc/crc-debug-tgwb6" Feb 17 19:06:29 crc kubenswrapper[4680]: I0217 19:06:29.329706 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn7ks\" (UniqueName: \"kubernetes.io/projected/d0426f7e-9801-45ad-a7d6-06dca08aa27b-kube-api-access-gn7ks\") pod \"crc-debug-tgwb6\" (UID: \"d0426f7e-9801-45ad-a7d6-06dca08aa27b\") " pod="openshift-must-gather-rrvkc/crc-debug-tgwb6" Feb 17 19:06:29 crc kubenswrapper[4680]: I0217 19:06:29.330000 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d0426f7e-9801-45ad-a7d6-06dca08aa27b-host\") pod \"crc-debug-tgwb6\" (UID: \"d0426f7e-9801-45ad-a7d6-06dca08aa27b\") " pod="openshift-must-gather-rrvkc/crc-debug-tgwb6" Feb 17 19:06:29 crc kubenswrapper[4680]: I0217 19:06:29.431674 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn7ks\" (UniqueName: \"kubernetes.io/projected/d0426f7e-9801-45ad-a7d6-06dca08aa27b-kube-api-access-gn7ks\") pod \"crc-debug-tgwb6\" (UID: \"d0426f7e-9801-45ad-a7d6-06dca08aa27b\") " pod="openshift-must-gather-rrvkc/crc-debug-tgwb6" Feb 17 19:06:29 crc kubenswrapper[4680]: I0217 19:06:29.431770 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d0426f7e-9801-45ad-a7d6-06dca08aa27b-host\") pod \"crc-debug-tgwb6\" (UID: \"d0426f7e-9801-45ad-a7d6-06dca08aa27b\") " pod="openshift-must-gather-rrvkc/crc-debug-tgwb6" Feb 17 19:06:29 crc kubenswrapper[4680]: I0217 19:06:29.432041 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d0426f7e-9801-45ad-a7d6-06dca08aa27b-host\") pod \"crc-debug-tgwb6\" (UID: \"d0426f7e-9801-45ad-a7d6-06dca08aa27b\") " pod="openshift-must-gather-rrvkc/crc-debug-tgwb6" Feb 17 19:06:29 crc kubenswrapper[4680]: I0217 19:06:29.453979 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn7ks\" (UniqueName: \"kubernetes.io/projected/d0426f7e-9801-45ad-a7d6-06dca08aa27b-kube-api-access-gn7ks\") pod \"crc-debug-tgwb6\" (UID: \"d0426f7e-9801-45ad-a7d6-06dca08aa27b\") " pod="openshift-must-gather-rrvkc/crc-debug-tgwb6" Feb 17 19:06:29 crc kubenswrapper[4680]: I0217 19:06:29.612228 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rrvkc/crc-debug-tgwb6" Feb 17 19:06:29 crc kubenswrapper[4680]: I0217 19:06:29.985560 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rrvkc/crc-debug-tgwb6" event={"ID":"d0426f7e-9801-45ad-a7d6-06dca08aa27b","Type":"ContainerStarted","Data":"56a03e147d16f396947306a93fef52265b49463bb47a483708a3a51b2b7d23b2"} Feb 17 19:06:29 crc kubenswrapper[4680]: I0217 19:06:29.985889 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rrvkc/crc-debug-tgwb6" event={"ID":"d0426f7e-9801-45ad-a7d6-06dca08aa27b","Type":"ContainerStarted","Data":"8a73b940f4e66bef1186152b1757175a090a627f67feb49047e7053ec2648e13"} Feb 17 19:06:29 crc kubenswrapper[4680]: I0217 19:06:29.998942 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rrvkc/crc-debug-tgwb6" podStartSLOduration=0.998923763 podStartE2EDuration="998.923763ms" podCreationTimestamp="2026-02-17 19:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 19:06:29.995460016 +0000 UTC m=+4919.546358695" watchObservedRunningTime="2026-02-17 19:06:29.998923763 +0000 UTC m=+4919.549822442" Feb 17 19:06:30 crc kubenswrapper[4680]: I0217 19:06:30.995822 4680 generic.go:334] "Generic (PLEG): container finished" podID="d0426f7e-9801-45ad-a7d6-06dca08aa27b" containerID="56a03e147d16f396947306a93fef52265b49463bb47a483708a3a51b2b7d23b2" exitCode=0 Feb 17 19:06:30 crc kubenswrapper[4680]: I0217 19:06:30.996898 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rrvkc/crc-debug-tgwb6" event={"ID":"d0426f7e-9801-45ad-a7d6-06dca08aa27b","Type":"ContainerDied","Data":"56a03e147d16f396947306a93fef52265b49463bb47a483708a3a51b2b7d23b2"} Feb 17 19:06:32 crc kubenswrapper[4680]: I0217 19:06:32.095981 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rrvkc/crc-debug-tgwb6" Feb 17 19:06:32 crc kubenswrapper[4680]: I0217 19:06:32.128937 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rrvkc/crc-debug-tgwb6"] Feb 17 19:06:32 crc kubenswrapper[4680]: I0217 19:06:32.136551 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rrvkc/crc-debug-tgwb6"] Feb 17 19:06:32 crc kubenswrapper[4680]: I0217 19:06:32.212989 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn7ks\" (UniqueName: \"kubernetes.io/projected/d0426f7e-9801-45ad-a7d6-06dca08aa27b-kube-api-access-gn7ks\") pod \"d0426f7e-9801-45ad-a7d6-06dca08aa27b\" (UID: \"d0426f7e-9801-45ad-a7d6-06dca08aa27b\") " Feb 17 19:06:32 crc kubenswrapper[4680]: I0217 19:06:32.213174 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d0426f7e-9801-45ad-a7d6-06dca08aa27b-host\") pod \"d0426f7e-9801-45ad-a7d6-06dca08aa27b\" (UID: \"d0426f7e-9801-45ad-a7d6-06dca08aa27b\") " Feb 17 19:06:32 crc kubenswrapper[4680]: I0217 19:06:32.213274 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0426f7e-9801-45ad-a7d6-06dca08aa27b-host" (OuterVolumeSpecName: "host") pod "d0426f7e-9801-45ad-a7d6-06dca08aa27b" (UID: "d0426f7e-9801-45ad-a7d6-06dca08aa27b"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 19:06:32 crc kubenswrapper[4680]: I0217 19:06:32.213860 4680 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d0426f7e-9801-45ad-a7d6-06dca08aa27b-host\") on node \"crc\" DevicePath \"\"" Feb 17 19:06:32 crc kubenswrapper[4680]: I0217 19:06:32.233455 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0426f7e-9801-45ad-a7d6-06dca08aa27b-kube-api-access-gn7ks" (OuterVolumeSpecName: "kube-api-access-gn7ks") pod "d0426f7e-9801-45ad-a7d6-06dca08aa27b" (UID: "d0426f7e-9801-45ad-a7d6-06dca08aa27b"). InnerVolumeSpecName "kube-api-access-gn7ks". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 19:06:32 crc kubenswrapper[4680]: I0217 19:06:32.315470 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn7ks\" (UniqueName: \"kubernetes.io/projected/d0426f7e-9801-45ad-a7d6-06dca08aa27b-kube-api-access-gn7ks\") on node \"crc\" DevicePath \"\"" Feb 17 19:06:33 crc kubenswrapper[4680]: I0217 19:06:33.013096 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a73b940f4e66bef1186152b1757175a090a627f67feb49047e7053ec2648e13" Feb 17 19:06:33 crc kubenswrapper[4680]: I0217 19:06:33.013151 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rrvkc/crc-debug-tgwb6" Feb 17 19:06:33 crc kubenswrapper[4680]: I0217 19:06:33.119057 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0426f7e-9801-45ad-a7d6-06dca08aa27b" path="/var/lib/kubelet/pods/d0426f7e-9801-45ad-a7d6-06dca08aa27b/volumes" Feb 17 19:06:33 crc kubenswrapper[4680]: I0217 19:06:33.304250 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rrvkc/crc-debug-sc2jc"] Feb 17 19:06:33 crc kubenswrapper[4680]: E0217 19:06:33.304907 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0426f7e-9801-45ad-a7d6-06dca08aa27b" containerName="container-00" Feb 17 19:06:33 crc kubenswrapper[4680]: I0217 19:06:33.304924 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0426f7e-9801-45ad-a7d6-06dca08aa27b" containerName="container-00" Feb 17 19:06:33 crc kubenswrapper[4680]: I0217 19:06:33.305095 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0426f7e-9801-45ad-a7d6-06dca08aa27b" containerName="container-00" Feb 17 19:06:33 crc kubenswrapper[4680]: I0217 19:06:33.305728 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rrvkc/crc-debug-sc2jc" Feb 17 19:06:33 crc kubenswrapper[4680]: I0217 19:06:33.435064 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jj5l\" (UniqueName: \"kubernetes.io/projected/25bc4637-dc8c-4936-95db-d6a75732c895-kube-api-access-4jj5l\") pod \"crc-debug-sc2jc\" (UID: \"25bc4637-dc8c-4936-95db-d6a75732c895\") " pod="openshift-must-gather-rrvkc/crc-debug-sc2jc" Feb 17 19:06:33 crc kubenswrapper[4680]: I0217 19:06:33.435203 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/25bc4637-dc8c-4936-95db-d6a75732c895-host\") pod \"crc-debug-sc2jc\" (UID: \"25bc4637-dc8c-4936-95db-d6a75732c895\") " pod="openshift-must-gather-rrvkc/crc-debug-sc2jc" Feb 17 19:06:33 crc kubenswrapper[4680]: I0217 19:06:33.536863 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/25bc4637-dc8c-4936-95db-d6a75732c895-host\") pod \"crc-debug-sc2jc\" (UID: \"25bc4637-dc8c-4936-95db-d6a75732c895\") " pod="openshift-must-gather-rrvkc/crc-debug-sc2jc" Feb 17 19:06:33 crc kubenswrapper[4680]: I0217 19:06:33.537058 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jj5l\" (UniqueName: \"kubernetes.io/projected/25bc4637-dc8c-4936-95db-d6a75732c895-kube-api-access-4jj5l\") pod \"crc-debug-sc2jc\" (UID: \"25bc4637-dc8c-4936-95db-d6a75732c895\") " pod="openshift-must-gather-rrvkc/crc-debug-sc2jc" Feb 17 19:06:33 crc kubenswrapper[4680]: I0217 19:06:33.537267 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/25bc4637-dc8c-4936-95db-d6a75732c895-host\") pod \"crc-debug-sc2jc\" (UID: \"25bc4637-dc8c-4936-95db-d6a75732c895\") " pod="openshift-must-gather-rrvkc/crc-debug-sc2jc" Feb 17 19:06:33 crc kubenswrapper[4680]: I0217 19:06:33.617024 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jj5l\" (UniqueName: \"kubernetes.io/projected/25bc4637-dc8c-4936-95db-d6a75732c895-kube-api-access-4jj5l\") pod \"crc-debug-sc2jc\" (UID: \"25bc4637-dc8c-4936-95db-d6a75732c895\") " pod="openshift-must-gather-rrvkc/crc-debug-sc2jc" Feb 17 19:06:33 crc kubenswrapper[4680]: I0217 19:06:33.625726 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rrvkc/crc-debug-sc2jc" Feb 17 19:06:34 crc kubenswrapper[4680]: I0217 19:06:34.031911 4680 generic.go:334] "Generic (PLEG): container finished" podID="25bc4637-dc8c-4936-95db-d6a75732c895" containerID="9f2e1ac3f57437ca8ac20b555ebb8f54804606d74cbe6c3b1952ece2693e478c" exitCode=0 Feb 17 19:06:34 crc kubenswrapper[4680]: I0217 19:06:34.032184 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rrvkc/crc-debug-sc2jc" event={"ID":"25bc4637-dc8c-4936-95db-d6a75732c895","Type":"ContainerDied","Data":"9f2e1ac3f57437ca8ac20b555ebb8f54804606d74cbe6c3b1952ece2693e478c"} Feb 17 19:06:34 crc kubenswrapper[4680]: I0217 19:06:34.032209 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rrvkc/crc-debug-sc2jc" event={"ID":"25bc4637-dc8c-4936-95db-d6a75732c895","Type":"ContainerStarted","Data":"c434951de74c29d74c22dd7c21b29672f1a5a0438531d1e32d9b75ae25fff0e6"} Feb 17 19:06:34 crc kubenswrapper[4680]: I0217 19:06:34.089699 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rrvkc/crc-debug-sc2jc"] Feb 17 19:06:34 crc kubenswrapper[4680]: I0217 19:06:34.127529 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rrvkc/crc-debug-sc2jc"] Feb 17 19:06:35 crc kubenswrapper[4680]: I0217 19:06:35.236819 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rrvkc/crc-debug-sc2jc" Feb 17 19:06:35 crc kubenswrapper[4680]: I0217 19:06:35.380501 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/25bc4637-dc8c-4936-95db-d6a75732c895-host\") pod \"25bc4637-dc8c-4936-95db-d6a75732c895\" (UID: \"25bc4637-dc8c-4936-95db-d6a75732c895\") " Feb 17 19:06:35 crc kubenswrapper[4680]: I0217 19:06:35.380584 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25bc4637-dc8c-4936-95db-d6a75732c895-host" (OuterVolumeSpecName: "host") pod "25bc4637-dc8c-4936-95db-d6a75732c895" (UID: "25bc4637-dc8c-4936-95db-d6a75732c895"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 19:06:35 crc kubenswrapper[4680]: I0217 19:06:35.380683 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jj5l\" (UniqueName: \"kubernetes.io/projected/25bc4637-dc8c-4936-95db-d6a75732c895-kube-api-access-4jj5l\") pod \"25bc4637-dc8c-4936-95db-d6a75732c895\" (UID: \"25bc4637-dc8c-4936-95db-d6a75732c895\") " Feb 17 19:06:35 crc kubenswrapper[4680]: I0217 19:06:35.381896 4680 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/25bc4637-dc8c-4936-95db-d6a75732c895-host\") on node \"crc\" DevicePath \"\"" Feb 17 19:06:35 crc kubenswrapper[4680]: I0217 19:06:35.392508 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25bc4637-dc8c-4936-95db-d6a75732c895-kube-api-access-4jj5l" (OuterVolumeSpecName: "kube-api-access-4jj5l") pod "25bc4637-dc8c-4936-95db-d6a75732c895" (UID: "25bc4637-dc8c-4936-95db-d6a75732c895"). InnerVolumeSpecName "kube-api-access-4jj5l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 19:06:35 crc kubenswrapper[4680]: I0217 19:06:35.483396 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jj5l\" (UniqueName: \"kubernetes.io/projected/25bc4637-dc8c-4936-95db-d6a75732c895-kube-api-access-4jj5l\") on node \"crc\" DevicePath \"\"" Feb 17 19:06:36 crc kubenswrapper[4680]: I0217 19:06:36.048708 4680 scope.go:117] "RemoveContainer" containerID="9f2e1ac3f57437ca8ac20b555ebb8f54804606d74cbe6c3b1952ece2693e478c" Feb 17 19:06:36 crc kubenswrapper[4680]: I0217 19:06:36.048780 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rrvkc/crc-debug-sc2jc" Feb 17 19:06:37 crc kubenswrapper[4680]: I0217 19:06:37.115700 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25bc4637-dc8c-4936-95db-d6a75732c895" path="/var/lib/kubelet/pods/25bc4637-dc8c-4936-95db-d6a75732c895/volumes" Feb 17 19:07:05 crc kubenswrapper[4680]: I0217 19:07:05.588550 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 19:07:05 crc kubenswrapper[4680]: I0217 19:07:05.589178 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 19:07:19 crc kubenswrapper[4680]: I0217 19:07:19.257630 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6dc475ddf4-vbkll_a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf/barbican-api/0.log" Feb 17 19:07:19 crc kubenswrapper[4680]: I0217 19:07:19.343600 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6dc475ddf4-vbkll_a71f4bc8-fabb-47ea-abf6-4c335bc8bdbf/barbican-api-log/0.log" Feb 17 19:07:19 crc kubenswrapper[4680]: I0217 19:07:19.488111 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-8474964c54-bvtdt_27ae5442-486e-406a-ba33-bfcbe792792f/barbican-keystone-listener/0.log" Feb 17 19:07:19 crc kubenswrapper[4680]: I0217 19:07:19.942353 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-8474964c54-bvtdt_27ae5442-486e-406a-ba33-bfcbe792792f/barbican-keystone-listener-log/0.log" Feb 17 19:07:19 crc kubenswrapper[4680]: I0217 19:07:19.966201 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6b7bf9567f-k8kfh_6ccb1c12-6633-49c0-879c-5eaa29017991/barbican-worker-log/0.log" Feb 17 19:07:19 crc kubenswrapper[4680]: I0217 19:07:19.981957 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6b7bf9567f-k8kfh_6ccb1c12-6633-49c0-879c-5eaa29017991/barbican-worker/0.log" Feb 17 19:07:20 crc kubenswrapper[4680]: I0217 19:07:20.235298 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7c4b1bad-b08f-4697-a925-b3b943394bdb/ceilometer-central-agent/0.log" Feb 17 19:07:20 crc kubenswrapper[4680]: I0217 19:07:20.238257 4680 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-wk4hd_756e6a23-4101-4886-bfe8-cc4f12799fd7/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 19:07:20 crc kubenswrapper[4680]: I0217 19:07:20.401561 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7c4b1bad-b08f-4697-a925-b3b943394bdb/proxy-httpd/0.log" Feb 17 19:07:20 crc kubenswrapper[4680]: I0217 19:07:20.450158 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7c4b1bad-b08f-4697-a925-b3b943394bdb/ceilometer-notification-agent/0.log" Feb 17 19:07:20 crc kubenswrapper[4680]: I0217 19:07:20.452584 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7c4b1bad-b08f-4697-a925-b3b943394bdb/sg-core/0.log" Feb 17 19:07:20 crc kubenswrapper[4680]: I0217 19:07:20.666818 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_878b21ac-092c-4e88-ba08-f59d0b220d79/cinder-api/0.log" Feb 17 19:07:20 crc kubenswrapper[4680]: I0217 19:07:20.668918 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_878b21ac-092c-4e88-ba08-f59d0b220d79/cinder-api-log/0.log" Feb 17 19:07:20 crc kubenswrapper[4680]: I0217 19:07:20.849747 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7171f050-7893-4d68-b65a-9425776b1301/cinder-scheduler/0.log" Feb 17 19:07:20 crc kubenswrapper[4680]: I0217 19:07:20.964563 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7171f050-7893-4d68-b65a-9425776b1301/probe/0.log" Feb 17 19:07:21 crc kubenswrapper[4680]: I0217 19:07:21.549119 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-pmfpf_05d9615f-35c4-4fcc-ac20-b798ba6abc72/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 19:07:21 crc kubenswrapper[4680]: I0217 19:07:21.651232 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-vr9b7_44aba16b-dafd-4ce2-8f13-01e7bb14a109/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 19:07:21 crc kubenswrapper[4680]: I0217 19:07:21.748443 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-hwrrl_44cbaec4-7f81-44c8-80f7-02818330e08f/init/0.log" Feb 17 19:07:21 crc kubenswrapper[4680]: I0217 19:07:21.928302 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-hwrrl_44cbaec4-7f81-44c8-80f7-02818330e08f/init/0.log" Feb 17 19:07:22 crc kubenswrapper[4680]: I0217 19:07:22.011105 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-sstxg_fc6f1259-7540-444b-997b-4bfdddbc6977/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 19:07:22 crc kubenswrapper[4680]: I0217 19:07:22.128367 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6ff66b85ff-hwrrl_44cbaec4-7f81-44c8-80f7-02818330e08f/dnsmasq-dns/0.log" Feb 17 19:07:22 crc kubenswrapper[4680]: I0217 19:07:22.300182 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4d07a220-fd50-489d-9df4-ba9ed7ce2c95/glance-log/0.log" Feb 17 19:07:22 crc kubenswrapper[4680]: I0217 19:07:22.310809 4680 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_4d07a220-fd50-489d-9df4-ba9ed7ce2c95/glance-httpd/0.log" Feb 17 19:07:22 crc kubenswrapper[4680]: I0217 19:07:22.493666 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c/glance-httpd/0.log" Feb 17 19:07:22 crc kubenswrapper[4680]: I0217 19:07:22.519535 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_9c43bf0f-9bd3-47a4-a98c-4d34fe11ac1c/glance-log/0.log" Feb 17 19:07:22 crc kubenswrapper[4680]: I0217 19:07:22.715692 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6978d4c7cb-4k7vb_91980b55-1251-4404-a22d-1572dd91ec8a/horizon/1.log" Feb 17 19:07:22 crc kubenswrapper[4680]: I0217 19:07:22.887459 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6978d4c7cb-4k7vb_91980b55-1251-4404-a22d-1572dd91ec8a/horizon/0.log" Feb 17 19:07:22 crc kubenswrapper[4680]: I0217 19:07:22.982573 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-flssf_3cb6431e-7411-4f54-9b55-6ba091b50804/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 19:07:23 crc kubenswrapper[4680]: I0217 19:07:23.217940 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-g99q6_88ff3876-8728-4981-a3c7-0ccef82dee3e/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 19:07:23 crc kubenswrapper[4680]: I0217 19:07:23.279897 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6978d4c7cb-4k7vb_91980b55-1251-4404-a22d-1572dd91ec8a/horizon-log/0.log" Feb 17 19:07:23 crc kubenswrapper[4680]: I0217 19:07:23.591874 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29522581-mtv5h_e3138dae-021c-4ebf-8ebd-81aed0fc0499/keystone-cron/0.log" Feb 17 19:07:23 crc kubenswrapper[4680]: I0217 19:07:23.884082 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_45b4ecc2-2420-49af-9969-0799eda9ee0f/kube-state-metrics/0.log" Feb 17 19:07:23 crc kubenswrapper[4680]: I0217 19:07:23.916136 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-854d5d774d-kbx8n_7370e5a4-158b-44db-8b30-e9201681787a/keystone-api/0.log" Feb 17 19:07:24 crc kubenswrapper[4680]: I0217 19:07:24.022815 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-65z5s_d1713773-5461-439e-8d37-5c1f2b54db0a/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 19:07:24 crc kubenswrapper[4680]: I0217 19:07:24.523849 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-q7c5z_6966330b-ec2c-4a0d-a3b2-50c582e0eb69/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 19:07:24 crc kubenswrapper[4680]: I0217 19:07:24.727282 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-59fd4ccc67-hf8zf_c24ed534-12dd-4bab-aac1-3ef792cd8e01/neutron-httpd/0.log" Feb 17 19:07:24 crc kubenswrapper[4680]: I0217 19:07:24.916924 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-59fd4ccc67-hf8zf_c24ed534-12dd-4bab-aac1-3ef792cd8e01/neutron-api/0.log" Feb 17 19:07:25 crc kubenswrapper[4680]: I0217 19:07:25.612419 4680 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_ac239796-b255-4e18-a027-2bd872c8ccc9/nova-cell0-conductor-conductor/0.log" Feb 17 19:07:25 crc kubenswrapper[4680]: I0217 19:07:25.761394 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_2aab84dd-8ed8-46fd-931a-31759485bb02/nova-cell1-conductor-conductor/0.log" Feb 17 19:07:26 crc kubenswrapper[4680]: I0217 19:07:26.155150 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_884f4451-a521-4e0f-9fd8-5f4e9c094fb8/nova-api-log/0.log" Feb 17 19:07:26 crc kubenswrapper[4680]: I0217 19:07:26.186283 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_6c1fc559-7f55-4c70-b448-921d56e4a6c3/nova-cell1-novncproxy-novncproxy/0.log" Feb 17 19:07:26 crc kubenswrapper[4680]: I0217 19:07:26.437705 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-plf45_111c14f3-6274-434d-a73d-09328a365bd2/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 19:07:26 crc kubenswrapper[4680]: I0217 19:07:26.595916 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e325136c-a30f-4f22-aecf-69319458b18d/nova-metadata-log/0.log" Feb 17 19:07:26 crc kubenswrapper[4680]: I0217 19:07:26.733796 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_884f4451-a521-4e0f-9fd8-5f4e9c094fb8/nova-api-api/0.log" Feb 17 19:07:27 crc kubenswrapper[4680]: I0217 19:07:27.085824 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_408a63fd-0033-4bf6-925a-cc82a63b2a60/mysql-bootstrap/0.log" Feb 17 19:07:27 crc kubenswrapper[4680]: I0217 19:07:27.271613 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_408a63fd-0033-4bf6-925a-cc82a63b2a60/mysql-bootstrap/0.log" Feb 17 19:07:27 crc kubenswrapper[4680]: I0217 19:07:27.367927 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_408a63fd-0033-4bf6-925a-cc82a63b2a60/galera/0.log" Feb 17 19:07:27 crc kubenswrapper[4680]: I0217 19:07:27.425141 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_13bf0e76-39eb-44b7-876e-76e06523f968/nova-scheduler-scheduler/0.log" Feb 17 19:07:27 crc kubenswrapper[4680]: I0217 19:07:27.603899 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_92bcbcf3-8caf-4493-99ac-478e3d55a1c1/mysql-bootstrap/0.log" Feb 17 19:07:27 crc kubenswrapper[4680]: I0217 19:07:27.882179 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_92bcbcf3-8caf-4493-99ac-478e3d55a1c1/galera/0.log" Feb 17 19:07:27 crc kubenswrapper[4680]: I0217 19:07:27.901571 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_92bcbcf3-8caf-4493-99ac-478e3d55a1c1/mysql-bootstrap/0.log" Feb 17 19:07:28 crc kubenswrapper[4680]: I0217 19:07:28.171758 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_a0986f8d-da98-4f6e-bd4e-457e4f405022/openstackclient/0.log" Feb 17 19:07:28 crc kubenswrapper[4680]: I0217 19:07:28.180662 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-7dz9r_2ce593ef-6a97-4361-af72-7637711c07b5/ovn-controller/0.log" Feb 17 19:07:28 crc kubenswrapper[4680]: I0217 19:07:28.412785 4680 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-metrics-tw5w5_42bb122a-c2c2-4a70-a745-1a6bb9334698/openstack-network-exporter/0.log" Feb 17 19:07:28 crc kubenswrapper[4680]: I0217 19:07:28.652544 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-grk2z_8f069d89-4464-4526-ba2c-5d4a9de13e02/ovsdb-server-init/0.log" Feb 17 19:07:28 crc kubenswrapper[4680]: I0217 19:07:28.671730 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e325136c-a30f-4f22-aecf-69319458b18d/nova-metadata-metadata/0.log" Feb 17 19:07:28 crc kubenswrapper[4680]: I0217 19:07:28.935937 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-grk2z_8f069d89-4464-4526-ba2c-5d4a9de13e02/ovsdb-server/0.log" Feb 17 19:07:28 crc kubenswrapper[4680]: I0217 19:07:28.937123 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-grk2z_8f069d89-4464-4526-ba2c-5d4a9de13e02/ovsdb-server-init/0.log" Feb 17 19:07:28 crc kubenswrapper[4680]: I0217 19:07:28.993022 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-grk2z_8f069d89-4464-4526-ba2c-5d4a9de13e02/ovs-vswitchd/0.log" Feb 17 19:07:29 crc kubenswrapper[4680]: I0217 19:07:29.267502 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_ffab513d-4164-4043-a65b-b72c0b1a67a4/openstack-network-exporter/0.log" Feb 17 19:07:29 crc kubenswrapper[4680]: I0217 19:07:29.309119 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-shzfm_c501ee0c-b4b4-4517-a946-acea84eaf4e9/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 19:07:29 crc kubenswrapper[4680]: I0217 19:07:29.312543 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_ffab513d-4164-4043-a65b-b72c0b1a67a4/ovn-northd/0.log" Feb 17 19:07:29 crc kubenswrapper[4680]: I0217 19:07:29.602823 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6ba0d348-8721-4d4f-9904-994b1ae56423/openstack-network-exporter/0.log" Feb 17 19:07:29 crc kubenswrapper[4680]: I0217 19:07:29.623373 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6ba0d348-8721-4d4f-9904-994b1ae56423/ovsdbserver-nb/0.log" Feb 17 19:07:29 crc kubenswrapper[4680]: I0217 19:07:29.806124 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e083ffca-d900-4b1a-bc3a-eacae2a81902/ovsdbserver-sb/0.log" Feb 17 19:07:29 crc kubenswrapper[4680]: I0217 19:07:29.846979 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e083ffca-d900-4b1a-bc3a-eacae2a81902/openstack-network-exporter/0.log" Feb 17 19:07:30 crc kubenswrapper[4680]: I0217 19:07:30.249196 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8ddff1fb-6281-40d0-82dd-7f395cd9d3dd/setup-container/0.log" Feb 17 19:07:30 crc kubenswrapper[4680]: I0217 19:07:30.263853 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6c7f4dffcd-6k6kn_48b0173d-34ae-465b-8119-b84227984ccd/placement-api/0.log" Feb 17 19:07:30 crc kubenswrapper[4680]: I0217 19:07:30.434464 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6c7f4dffcd-6k6kn_48b0173d-34ae-465b-8119-b84227984ccd/placement-log/0.log" Feb 17 19:07:30 crc kubenswrapper[4680]: I0217 19:07:30.458201 4680 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8ddff1fb-6281-40d0-82dd-7f395cd9d3dd/setup-container/0.log" Feb 17 19:07:30 crc kubenswrapper[4680]: I0217 19:07:30.522923 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8ddff1fb-6281-40d0-82dd-7f395cd9d3dd/rabbitmq/0.log" Feb 17 19:07:30 crc kubenswrapper[4680]: I0217 19:07:30.940828 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f5b94b27-be97-48aa-a421-49c29363f388/setup-container/0.log" Feb 17 19:07:31 crc kubenswrapper[4680]: I0217 19:07:31.131568 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f5b94b27-be97-48aa-a421-49c29363f388/setup-container/0.log" Feb 17 19:07:31 crc kubenswrapper[4680]: I0217 19:07:31.199704 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-tcn7b_aaa35e71-4185-4074-8f0a-eb601168a788/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 19:07:31 crc kubenswrapper[4680]: I0217 19:07:31.216680 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f5b94b27-be97-48aa-a421-49c29363f388/rabbitmq/0.log" Feb 17 19:07:31 crc kubenswrapper[4680]: I0217 19:07:31.455897 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-trq4p_4dbcac70-066d-4d71-8f2d-f8762e241236/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 19:07:31 crc kubenswrapper[4680]: I0217 19:07:31.499492 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-4gbv8_b65551bb-7084-4453-8791-0a12d9e928e4/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 19:07:31 crc kubenswrapper[4680]: I0217 19:07:31.726515 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-h76n9_5731e4b0-b432-4c63-aa65-59eed4740731/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 19:07:31 crc kubenswrapper[4680]: I0217 19:07:31.811172 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-5gb6z_ad93d3c0-6115-499e-b0db-6840bda12a02/ssh-known-hosts-edpm-deployment/0.log" Feb 17 19:07:32 crc kubenswrapper[4680]: I0217 19:07:32.107407 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5c85cf9547-qhbrg_fc8411d6-1f53-4a7f-8528-0b3719bc5a6f/proxy-server/0.log" Feb 17 19:07:32 crc kubenswrapper[4680]: I0217 19:07:32.164088 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5c85cf9547-qhbrg_fc8411d6-1f53-4a7f-8528-0b3719bc5a6f/proxy-httpd/0.log" Feb 17 19:07:32 crc kubenswrapper[4680]: I0217 19:07:32.306271 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-b9q8k_e281ffae-b948-40c0-b8a1-68805d081000/swift-ring-rebalance/0.log" Feb 17 19:07:32 crc kubenswrapper[4680]: I0217 19:07:32.428836 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54c53390-eb8a-4a06-9d50-7d0a4f10d3df/account-auditor/0.log" Feb 17 19:07:32 crc kubenswrapper[4680]: I0217 19:07:32.505591 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54c53390-eb8a-4a06-9d50-7d0a4f10d3df/account-reaper/0.log" Feb 17 19:07:32 crc kubenswrapper[4680]: I0217 19:07:32.656313 4680 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_swift-storage-0_54c53390-eb8a-4a06-9d50-7d0a4f10d3df/container-auditor/0.log" Feb 17 19:07:32 crc kubenswrapper[4680]: I0217 19:07:32.665718 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54c53390-eb8a-4a06-9d50-7d0a4f10d3df/account-replicator/0.log" Feb 17 19:07:32 crc kubenswrapper[4680]: I0217 19:07:32.669302 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54c53390-eb8a-4a06-9d50-7d0a4f10d3df/account-server/0.log" Feb 17 19:07:32 crc kubenswrapper[4680]: I0217 19:07:32.804642 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54c53390-eb8a-4a06-9d50-7d0a4f10d3df/container-replicator/0.log" Feb 17 19:07:32 crc kubenswrapper[4680]: I0217 19:07:32.876226 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54c53390-eb8a-4a06-9d50-7d0a4f10d3df/container-server/0.log" Feb 17 19:07:32 crc kubenswrapper[4680]: I0217 19:07:32.932008 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54c53390-eb8a-4a06-9d50-7d0a4f10d3df/container-updater/0.log" Feb 17 19:07:32 crc kubenswrapper[4680]: I0217 19:07:32.957274 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54c53390-eb8a-4a06-9d50-7d0a4f10d3df/object-auditor/0.log" Feb 17 19:07:33 crc kubenswrapper[4680]: I0217 19:07:33.032180 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54c53390-eb8a-4a06-9d50-7d0a4f10d3df/object-expirer/0.log" Feb 17 19:07:33 crc kubenswrapper[4680]: I0217 19:07:33.493577 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54c53390-eb8a-4a06-9d50-7d0a4f10d3df/object-server/0.log" Feb 17 19:07:33 crc kubenswrapper[4680]: I0217 19:07:33.496270 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54c53390-eb8a-4a06-9d50-7d0a4f10d3df/object-updater/0.log" Feb 17 19:07:33 crc kubenswrapper[4680]: I0217 19:07:33.519425 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54c53390-eb8a-4a06-9d50-7d0a4f10d3df/object-replicator/0.log" Feb 17 19:07:33 crc kubenswrapper[4680]: I0217 19:07:33.750951 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54c53390-eb8a-4a06-9d50-7d0a4f10d3df/rsync/0.log" Feb 17 19:07:33 crc kubenswrapper[4680]: I0217 19:07:33.835015 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54c53390-eb8a-4a06-9d50-7d0a4f10d3df/swift-recon-cron/0.log" Feb 17 19:07:33 crc kubenswrapper[4680]: I0217 19:07:33.960242 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-grdvz_030c3571-afbe-48b3-9a59-890b63527ab7/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 19:07:34 crc kubenswrapper[4680]: I0217 19:07:34.147572 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_327c58a4-13db-4bf2-b6e4-755c23c2a36f/tempest-tests-tempest-tests-runner/0.log" Feb 17 19:07:34 crc kubenswrapper[4680]: I0217 19:07:34.219000 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_8d217e59-e2e9-4cfa-a272-5cc6c543347f/test-operator-logs-container/0.log" Feb 17 19:07:34 crc kubenswrapper[4680]: I0217 19:07:34.433152 4680 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-4g8lm_f8db14b6-26ac-4b47-a820-ff6f4ce152dd/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 19:07:35 crc kubenswrapper[4680]: I0217 19:07:35.589355 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 19:07:35 crc kubenswrapper[4680]: I0217 19:07:35.589749 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 19:07:46 crc kubenswrapper[4680]: I0217 19:07:46.134983 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_391eec5a-7bea-47ed-963a-8152e1db2357/memcached/0.log" Feb 17 19:08:05 crc kubenswrapper[4680]: I0217 19:08:05.588633 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 19:08:05 crc kubenswrapper[4680]: I0217 19:08:05.589346 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 19:08:05 crc kubenswrapper[4680]: I0217 19:08:05.589437 4680 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" Feb 17 19:08:05 crc kubenswrapper[4680]: I0217 19:08:05.590636 4680 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"510f9ea3cf36b25b4f055d58767be69f3b9b74d127a53a6bc4be5ef815799d22"} pod="openshift-machine-config-operator/machine-config-daemon-rfw56" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 19:08:05 crc kubenswrapper[4680]: I0217 19:08:05.590771 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" containerID="cri-o://510f9ea3cf36b25b4f055d58767be69f3b9b74d127a53a6bc4be5ef815799d22" gracePeriod=600 Feb 17 19:08:05 crc kubenswrapper[4680]: E0217 19:08:05.833521 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 19:08:05 crc kubenswrapper[4680]: I0217 19:08:05.870943 4680 generic.go:334] "Generic (PLEG): container finished" 
podID="8dd08058-016e-4607-8be6-40463674c2fe" containerID="510f9ea3cf36b25b4f055d58767be69f3b9b74d127a53a6bc4be5ef815799d22" exitCode=0 Feb 17 19:08:05 crc kubenswrapper[4680]: I0217 19:08:05.870994 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerDied","Data":"510f9ea3cf36b25b4f055d58767be69f3b9b74d127a53a6bc4be5ef815799d22"} Feb 17 19:08:05 crc kubenswrapper[4680]: I0217 19:08:05.871041 4680 scope.go:117] "RemoveContainer" containerID="beb82b8d29ac7889c1ee8f92291d8e0e0a4700c02477e2b7569bdbf8e3b1e375" Feb 17 19:08:05 crc kubenswrapper[4680]: I0217 19:08:05.871701 4680 scope.go:117] "RemoveContainer" containerID="510f9ea3cf36b25b4f055d58767be69f3b9b74d127a53a6bc4be5ef815799d22" Feb 17 19:08:05 crc kubenswrapper[4680]: E0217 19:08:05.871932 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 19:08:08 crc kubenswrapper[4680]: I0217 19:08:08.049758 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr_ad62ce23-e261-4066-9465-29efa5b1eb6d/util/0.log" Feb 17 19:08:08 crc kubenswrapper[4680]: I0217 19:08:08.266673 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr_ad62ce23-e261-4066-9465-29efa5b1eb6d/util/0.log" Feb 17 19:08:08 crc kubenswrapper[4680]: I0217 19:08:08.353767 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr_ad62ce23-e261-4066-9465-29efa5b1eb6d/pull/0.log" Feb 17 19:08:08 crc kubenswrapper[4680]: I0217 19:08:08.371341 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr_ad62ce23-e261-4066-9465-29efa5b1eb6d/pull/0.log" Feb 17 19:08:08 crc kubenswrapper[4680]: I0217 19:08:08.565589 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr_ad62ce23-e261-4066-9465-29efa5b1eb6d/extract/0.log" Feb 17 19:08:08 crc kubenswrapper[4680]: I0217 19:08:08.591484 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr_ad62ce23-e261-4066-9465-29efa5b1eb6d/util/0.log" Feb 17 19:08:08 crc kubenswrapper[4680]: I0217 19:08:08.608843 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6058dfda14cb65c58e6e1b1bf4650548f92122c2af59420cb44b4d3438h69wr_ad62ce23-e261-4066-9465-29efa5b1eb6d/pull/0.log" Feb 17 19:08:09 crc kubenswrapper[4680]: I0217 19:08:09.196305 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-64dzx_5e432327-20d1-4688-8dda-173b62e27110/manager/0.log" Feb 17 19:08:09 crc kubenswrapper[4680]: I0217 19:08:09.652871 4680 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-zq66h_3a32bda7-58e5-494b-af42-d255934537ac/manager/0.log" Feb 17 19:08:09 crc kubenswrapper[4680]: I0217 19:08:09.866559 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-mqdqj_dd0b1acf-7e52-47f1-9fa3-c82be1b7ce8f/manager/0.log" Feb 17 19:08:10 crc kubenswrapper[4680]: I0217 19:08:10.212628 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-wpzsn_85f8c95f-eecb-465c-be1f-c311edcbe597/manager/0.log" Feb 17 19:08:10 crc kubenswrapper[4680]: I0217 19:08:10.834761 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-fnnbk_25257448-9b91-4b64-9279-e3e51e752019/manager/0.log" Feb 17 19:08:11 crc kubenswrapper[4680]: I0217 19:08:11.132850 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-ff5c8777-ft44h_a61ed4b7-335a-429e-8c5a-17b2d322c78e/manager/0.log" Feb 17 19:08:11 crc kubenswrapper[4680]: I0217 19:08:11.540129 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-xqwqr_09667f14-d7ef-4502-9880-b0f14e2247b8/manager/0.log" Feb 17 19:08:11 crc kubenswrapper[4680]: I0217 19:08:11.637368 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-nbwj4_7030ebc6-bc71-4388-97c3-14403919b886/manager/0.log" Feb 17 19:08:11 crc kubenswrapper[4680]: I0217 19:08:11.768765 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-bxxqw_01a45563-cabc-4dbc-80cb-49b274ab96cd/manager/0.log" Feb 17 19:08:11 crc kubenswrapper[4680]: I0217 19:08:11.985984 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-th6kc_27bd44a3-f6bc-462d-92a4-53287bd71b8f/manager/0.log" Feb 17 19:08:12 crc kubenswrapper[4680]: I0217 19:08:12.251671 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-n45mv_44afd1e8-ecdc-4854-bafa-d2bdabe26c5b/manager/0.log" Feb 17 19:08:13 crc kubenswrapper[4680]: I0217 19:08:13.055696 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-jpnmh_73c67b77-ecaf-4925-aba5-855dc368e729/manager/0.log" Feb 17 19:08:13 crc kubenswrapper[4680]: I0217 19:08:13.333462 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84966cf5c4r5wnv_3a909135-a1d3-4a69-875d-74f19de59cf1/manager/0.log" Feb 17 19:08:13 crc kubenswrapper[4680]: I0217 19:08:13.695286 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5498fb9db9-fg6q4_c343adc1-0596-4313-9d07-e7a140130045/operator/0.log" Feb 17 19:08:14 crc kubenswrapper[4680]: I0217 19:08:14.019630 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-cmvdn_33ad8a07-6bf1-4c03-a44f-5462b01d5a82/registry-server/0.log" Feb 17 19:08:14 crc kubenswrapper[4680]: I0217 19:08:14.470669 4680 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-htc8b_36e854c8-4742-4138-8766-6614fa005bae/manager/0.log" Feb 17 19:08:14 crc kubenswrapper[4680]: I0217 19:08:14.786273 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-rw6wk_eeb4f8c1-4744-4e7f-a8a4-a2deb95f4b45/manager/0.log" Feb 17 19:08:14 crc kubenswrapper[4680]: I0217 19:08:14.932565 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-6k4kw_7e8462ff-c9e3-4cff-bd4b-7f94d8acb8b1/operator/0.log" Feb 17 19:08:15 crc kubenswrapper[4680]: I0217 19:08:15.706187 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-d5n5g_41d9cacc-1317-479d-99ff-8eb7e6f3d6b6/manager/0.log" Feb 17 19:08:16 crc kubenswrapper[4680]: I0217 19:08:16.005810 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-7pstg_5663f682-3b3a-4061-8d4a-377a103c0bd4/manager/0.log" Feb 17 19:08:16 crc kubenswrapper[4680]: I0217 19:08:16.066995 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-6628n_41c69a9d-0d86-4088-9490-262b3eff9441/manager/0.log" Feb 17 19:08:16 crc kubenswrapper[4680]: I0217 19:08:16.106374 4680 scope.go:117] "RemoveContainer" containerID="510f9ea3cf36b25b4f055d58767be69f3b9b74d127a53a6bc4be5ef815799d22" Feb 17 19:08:16 crc kubenswrapper[4680]: E0217 19:08:16.106598 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 19:08:16 crc kubenswrapper[4680]: I0217 19:08:16.110306 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-b96b9dfc9-r9bnq_1982baea-3b82-4376-ab68-6559095032d5/manager/0.log" Feb 17 19:08:16 crc kubenswrapper[4680]: I0217 19:08:16.158360 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-vgwfn_6f60616b-3009-4961-af69-11a3d3e42f4b/manager/0.log" Feb 17 19:08:16 crc kubenswrapper[4680]: I0217 19:08:16.401043 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-t8p47_827a72ee-4516-4fe2-866a-a87124fe7439/manager/0.log" Feb 17 19:08:20 crc kubenswrapper[4680]: I0217 19:08:20.495064 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-9qzbv_c4d30f2e-a54a-43ef-923f-103f4e0c54d5/manager/0.log" Feb 17 19:08:30 crc kubenswrapper[4680]: I0217 19:08:30.105669 4680 scope.go:117] "RemoveContainer" containerID="510f9ea3cf36b25b4f055d58767be69f3b9b74d127a53a6bc4be5ef815799d22" Feb 17 19:08:30 crc kubenswrapper[4680]: E0217 19:08:30.107750 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 19:08:38 crc kubenswrapper[4680]: I0217 19:08:38.655221 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-c462h_67e5bb04-4920-4f51-a729-0af77b16490e/control-plane-machine-set-operator/0.log" Feb 17 19:08:39 crc kubenswrapper[4680]: I0217 19:08:39.126020 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vhvpq_3abf57a9-2bd6-4f7f-ba75-5fe928201df8/kube-rbac-proxy/0.log" Feb 17 19:08:39 crc kubenswrapper[4680]: I0217 19:08:39.146011 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vhvpq_3abf57a9-2bd6-4f7f-ba75-5fe928201df8/machine-api-operator/0.log" Feb 17 19:08:45 crc kubenswrapper[4680]: I0217 19:08:45.105680 4680 scope.go:117] "RemoveContainer" containerID="510f9ea3cf36b25b4f055d58767be69f3b9b74d127a53a6bc4be5ef815799d22" Feb 17 19:08:45 crc kubenswrapper[4680]: E0217 19:08:45.106615 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 19:08:55 crc kubenswrapper[4680]: I0217 19:08:55.759169 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-6mmb4_07caac96-9977-4dd7-9220-4ba6d1934a4c/cert-manager-controller/0.log" Feb 17 19:08:56 crc kubenswrapper[4680]: I0217 19:08:56.195037 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-r5b7s_0b2cbbee-856e-43b8-afa2-e3cbd86cf3cc/cert-manager-cainjector/0.log" Feb 17 19:08:56 crc kubenswrapper[4680]: I0217 19:08:56.236168 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-h4nrt_98cd155c-93dd-44c8-b05b-b3375c0eb430/cert-manager-webhook/0.log" Feb 17 19:08:57 crc kubenswrapper[4680]: I0217 19:08:57.105164 4680 scope.go:117] "RemoveContainer" containerID="510f9ea3cf36b25b4f055d58767be69f3b9b74d127a53a6bc4be5ef815799d22" Feb 17 19:08:57 crc kubenswrapper[4680]: E0217 19:08:57.105484 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 19:09:08 crc kubenswrapper[4680]: I0217 19:09:08.106248 4680 scope.go:117] "RemoveContainer" containerID="510f9ea3cf36b25b4f055d58767be69f3b9b74d127a53a6bc4be5ef815799d22" Feb 17 19:09:08 crc kubenswrapper[4680]: E0217 19:09:08.106918 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 19:09:10 crc kubenswrapper[4680]: I0217 19:09:10.923665 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-bqn9b_d11f19f3-34a3-42d7-b206-4c4377f903c1/nmstate-console-plugin/0.log" Feb 17 19:09:11 crc kubenswrapper[4680]: I0217 19:09:11.093575 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-nh5lq_853da1fb-12a1-4f09-969f-9e8e58ebafc1/nmstate-handler/0.log" Feb 17 19:09:11 crc kubenswrapper[4680]: I0217 19:09:11.190929 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-88gd2_2e5d7226-10e8-4b3f-b5ef-3d976c1f8462/kube-rbac-proxy/0.log" Feb 17 19:09:11 crc kubenswrapper[4680]: I0217 19:09:11.225155 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-88gd2_2e5d7226-10e8-4b3f-b5ef-3d976c1f8462/nmstate-metrics/0.log" Feb 17 19:09:11 crc kubenswrapper[4680]: I0217 19:09:11.384595 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-nhzd8_5b484a76-ef98-43ef-9c15-0d2f86ce23a6/nmstate-operator/0.log" Feb 17 19:09:12 crc kubenswrapper[4680]: I0217 19:09:12.040145 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-hffc9_35eb3c36-116b-4a21-9d5f-0a4c28011eb4/nmstate-webhook/0.log" Feb 17 19:09:13 crc kubenswrapper[4680]: I0217 19:09:13.050666 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-52lpf"] Feb 17 19:09:13 crc kubenswrapper[4680]: E0217 19:09:13.051296 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25bc4637-dc8c-4936-95db-d6a75732c895" containerName="container-00" Feb 17 19:09:13 crc kubenswrapper[4680]: I0217 19:09:13.051308 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="25bc4637-dc8c-4936-95db-d6a75732c895" containerName="container-00" Feb 17 19:09:13 crc kubenswrapper[4680]: I0217 19:09:13.051489 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="25bc4637-dc8c-4936-95db-d6a75732c895" containerName="container-00" Feb 17 19:09:13 crc kubenswrapper[4680]: I0217 19:09:13.052818 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-52lpf" Feb 17 19:09:13 crc kubenswrapper[4680]: I0217 19:09:13.069028 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-52lpf"] Feb 17 19:09:13 crc kubenswrapper[4680]: I0217 19:09:13.208625 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7199bf44-9fdd-4e04-849b-a0a342cd4c0d-utilities\") pod \"certified-operators-52lpf\" (UID: \"7199bf44-9fdd-4e04-849b-a0a342cd4c0d\") " pod="openshift-marketplace/certified-operators-52lpf" Feb 17 19:09:13 crc kubenswrapper[4680]: I0217 19:09:13.208742 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l67zb\" (UniqueName: \"kubernetes.io/projected/7199bf44-9fdd-4e04-849b-a0a342cd4c0d-kube-api-access-l67zb\") pod \"certified-operators-52lpf\" (UID: \"7199bf44-9fdd-4e04-849b-a0a342cd4c0d\") " pod="openshift-marketplace/certified-operators-52lpf" Feb 17 19:09:13 crc kubenswrapper[4680]: I0217 19:09:13.208896 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7199bf44-9fdd-4e04-849b-a0a342cd4c0d-catalog-content\") pod \"certified-operators-52lpf\" (UID: \"7199bf44-9fdd-4e04-849b-a0a342cd4c0d\") " pod="openshift-marketplace/certified-operators-52lpf" Feb 17 19:09:13 crc kubenswrapper[4680]: I0217 19:09:13.310515 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7199bf44-9fdd-4e04-849b-a0a342cd4c0d-catalog-content\") pod \"certified-operators-52lpf\" (UID: \"7199bf44-9fdd-4e04-849b-a0a342cd4c0d\") " pod="openshift-marketplace/certified-operators-52lpf" Feb 17 19:09:13 crc kubenswrapper[4680]: I0217 19:09:13.310576 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7199bf44-9fdd-4e04-849b-a0a342cd4c0d-utilities\") pod \"certified-operators-52lpf\" (UID: \"7199bf44-9fdd-4e04-849b-a0a342cd4c0d\") " pod="openshift-marketplace/certified-operators-52lpf" Feb 17 19:09:13 crc kubenswrapper[4680]: I0217 19:09:13.310635 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l67zb\" (UniqueName: \"kubernetes.io/projected/7199bf44-9fdd-4e04-849b-a0a342cd4c0d-kube-api-access-l67zb\") pod \"certified-operators-52lpf\" (UID: \"7199bf44-9fdd-4e04-849b-a0a342cd4c0d\") " pod="openshift-marketplace/certified-operators-52lpf" Feb 17 19:09:13 crc kubenswrapper[4680]: I0217 19:09:13.311434 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7199bf44-9fdd-4e04-849b-a0a342cd4c0d-catalog-content\") pod \"certified-operators-52lpf\" (UID: \"7199bf44-9fdd-4e04-849b-a0a342cd4c0d\") " pod="openshift-marketplace/certified-operators-52lpf" Feb 17 19:09:13 crc kubenswrapper[4680]: I0217 19:09:13.311655 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7199bf44-9fdd-4e04-849b-a0a342cd4c0d-utilities\") pod \"certified-operators-52lpf\" (UID: \"7199bf44-9fdd-4e04-849b-a0a342cd4c0d\") " pod="openshift-marketplace/certified-operators-52lpf" Feb 17 19:09:13 crc kubenswrapper[4680]: I0217 19:09:13.337209 4680 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-l67zb\" (UniqueName: \"kubernetes.io/projected/7199bf44-9fdd-4e04-849b-a0a342cd4c0d-kube-api-access-l67zb\") pod \"certified-operators-52lpf\" (UID: \"7199bf44-9fdd-4e04-849b-a0a342cd4c0d\") " pod="openshift-marketplace/certified-operators-52lpf" Feb 17 19:09:13 crc kubenswrapper[4680]: I0217 19:09:13.371050 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-52lpf" Feb 17 19:09:13 crc kubenswrapper[4680]: I0217 19:09:13.886880 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-52lpf"] Feb 17 19:09:14 crc kubenswrapper[4680]: I0217 19:09:14.490276 4680 generic.go:334] "Generic (PLEG): container finished" podID="7199bf44-9fdd-4e04-849b-a0a342cd4c0d" containerID="a3d6fcccea931075533ab8163f9803f5a28c25661e8577e1938e47fdff41d5cb" exitCode=0 Feb 17 19:09:14 crc kubenswrapper[4680]: I0217 19:09:14.490597 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-52lpf" event={"ID":"7199bf44-9fdd-4e04-849b-a0a342cd4c0d","Type":"ContainerDied","Data":"a3d6fcccea931075533ab8163f9803f5a28c25661e8577e1938e47fdff41d5cb"} Feb 17 19:09:14 crc kubenswrapper[4680]: I0217 19:09:14.490623 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-52lpf" event={"ID":"7199bf44-9fdd-4e04-849b-a0a342cd4c0d","Type":"ContainerStarted","Data":"dffbabbf74aa00d03d3488f2382cfc5202cac226e1d602708eb3212ed49bbb62"} Feb 17 19:09:15 crc kubenswrapper[4680]: I0217 19:09:15.498746 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-52lpf" event={"ID":"7199bf44-9fdd-4e04-849b-a0a342cd4c0d","Type":"ContainerStarted","Data":"56f07bfd0248bc552c0f58a19f754b39c2e018f12d3626c331158a57c13a28b9"} Feb 17 19:09:17 crc kubenswrapper[4680]: I0217 19:09:17.517072 4680 generic.go:334] "Generic (PLEG): container finished" podID="7199bf44-9fdd-4e04-849b-a0a342cd4c0d" containerID="56f07bfd0248bc552c0f58a19f754b39c2e018f12d3626c331158a57c13a28b9" exitCode=0 Feb 17 19:09:17 crc kubenswrapper[4680]: I0217 19:09:17.517168 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-52lpf" event={"ID":"7199bf44-9fdd-4e04-849b-a0a342cd4c0d","Type":"ContainerDied","Data":"56f07bfd0248bc552c0f58a19f754b39c2e018f12d3626c331158a57c13a28b9"} Feb 17 19:09:18 crc kubenswrapper[4680]: I0217 19:09:18.528644 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-52lpf" event={"ID":"7199bf44-9fdd-4e04-849b-a0a342cd4c0d","Type":"ContainerStarted","Data":"60cebeb777196bb7f130ee4ca9a2052d619c9f9e92af531e4e86bd88816af689"} Feb 17 19:09:18 crc kubenswrapper[4680]: I0217 19:09:18.549185 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-52lpf" podStartSLOduration=2.166388565 podStartE2EDuration="5.549169427s" podCreationTimestamp="2026-02-17 19:09:13 +0000 UTC" firstStartedPulling="2026-02-17 19:09:14.491844219 +0000 UTC m=+5084.042742898" lastFinishedPulling="2026-02-17 19:09:17.874625081 +0000 UTC m=+5087.425523760" observedRunningTime="2026-02-17 19:09:18.546987118 +0000 UTC m=+5088.097885797" watchObservedRunningTime="2026-02-17 19:09:18.549169427 +0000 UTC m=+5088.100068096" Feb 17 19:09:22 crc kubenswrapper[4680]: I0217 19:09:22.105066 4680 scope.go:117] "RemoveContainer" 
containerID="510f9ea3cf36b25b4f055d58767be69f3b9b74d127a53a6bc4be5ef815799d22" Feb 17 19:09:22 crc kubenswrapper[4680]: E0217 19:09:22.105627 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 19:09:23 crc kubenswrapper[4680]: I0217 19:09:23.371804 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-52lpf" Feb 17 19:09:23 crc kubenswrapper[4680]: I0217 19:09:23.372126 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-52lpf" Feb 17 19:09:23 crc kubenswrapper[4680]: I0217 19:09:23.424392 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-52lpf" Feb 17 19:09:23 crc kubenswrapper[4680]: I0217 19:09:23.629207 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-52lpf" Feb 17 19:09:23 crc kubenswrapper[4680]: I0217 19:09:23.704389 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-52lpf"] Feb 17 19:09:25 crc kubenswrapper[4680]: I0217 19:09:25.583522 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-52lpf" podUID="7199bf44-9fdd-4e04-849b-a0a342cd4c0d" containerName="registry-server" containerID="cri-o://60cebeb777196bb7f130ee4ca9a2052d619c9f9e92af531e4e86bd88816af689" gracePeriod=2 Feb 17 19:09:26 crc kubenswrapper[4680]: I0217 19:09:26.115779 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-52lpf" Feb 17 19:09:26 crc kubenswrapper[4680]: I0217 19:09:26.261806 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7199bf44-9fdd-4e04-849b-a0a342cd4c0d-utilities\") pod \"7199bf44-9fdd-4e04-849b-a0a342cd4c0d\" (UID: \"7199bf44-9fdd-4e04-849b-a0a342cd4c0d\") " Feb 17 19:09:26 crc kubenswrapper[4680]: I0217 19:09:26.261876 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l67zb\" (UniqueName: \"kubernetes.io/projected/7199bf44-9fdd-4e04-849b-a0a342cd4c0d-kube-api-access-l67zb\") pod \"7199bf44-9fdd-4e04-849b-a0a342cd4c0d\" (UID: \"7199bf44-9fdd-4e04-849b-a0a342cd4c0d\") " Feb 17 19:09:26 crc kubenswrapper[4680]: I0217 19:09:26.261927 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7199bf44-9fdd-4e04-849b-a0a342cd4c0d-catalog-content\") pod \"7199bf44-9fdd-4e04-849b-a0a342cd4c0d\" (UID: \"7199bf44-9fdd-4e04-849b-a0a342cd4c0d\") " Feb 17 19:09:26 crc kubenswrapper[4680]: I0217 19:09:26.264625 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7199bf44-9fdd-4e04-849b-a0a342cd4c0d-utilities" (OuterVolumeSpecName: "utilities") pod "7199bf44-9fdd-4e04-849b-a0a342cd4c0d" (UID: "7199bf44-9fdd-4e04-849b-a0a342cd4c0d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 19:09:26 crc kubenswrapper[4680]: I0217 19:09:26.268792 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7199bf44-9fdd-4e04-849b-a0a342cd4c0d-kube-api-access-l67zb" (OuterVolumeSpecName: "kube-api-access-l67zb") pod "7199bf44-9fdd-4e04-849b-a0a342cd4c0d" (UID: "7199bf44-9fdd-4e04-849b-a0a342cd4c0d"). InnerVolumeSpecName "kube-api-access-l67zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 19:09:26 crc kubenswrapper[4680]: I0217 19:09:26.320362 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7199bf44-9fdd-4e04-849b-a0a342cd4c0d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7199bf44-9fdd-4e04-849b-a0a342cd4c0d" (UID: "7199bf44-9fdd-4e04-849b-a0a342cd4c0d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 19:09:26 crc kubenswrapper[4680]: I0217 19:09:26.364852 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7199bf44-9fdd-4e04-849b-a0a342cd4c0d-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 19:09:26 crc kubenswrapper[4680]: I0217 19:09:26.364920 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l67zb\" (UniqueName: \"kubernetes.io/projected/7199bf44-9fdd-4e04-849b-a0a342cd4c0d-kube-api-access-l67zb\") on node \"crc\" DevicePath \"\"" Feb 17 19:09:26 crc kubenswrapper[4680]: I0217 19:09:26.364937 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7199bf44-9fdd-4e04-849b-a0a342cd4c0d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 19:09:26 crc kubenswrapper[4680]: I0217 19:09:26.592472 4680 generic.go:334] "Generic (PLEG): container finished" podID="7199bf44-9fdd-4e04-849b-a0a342cd4c0d" containerID="60cebeb777196bb7f130ee4ca9a2052d619c9f9e92af531e4e86bd88816af689" exitCode=0 Feb 17 19:09:26 crc kubenswrapper[4680]: I0217 19:09:26.592543 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-52lpf" Feb 17 19:09:26 crc kubenswrapper[4680]: I0217 19:09:26.592544 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-52lpf" event={"ID":"7199bf44-9fdd-4e04-849b-a0a342cd4c0d","Type":"ContainerDied","Data":"60cebeb777196bb7f130ee4ca9a2052d619c9f9e92af531e4e86bd88816af689"} Feb 17 19:09:26 crc kubenswrapper[4680]: I0217 19:09:26.592983 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-52lpf" event={"ID":"7199bf44-9fdd-4e04-849b-a0a342cd4c0d","Type":"ContainerDied","Data":"dffbabbf74aa00d03d3488f2382cfc5202cac226e1d602708eb3212ed49bbb62"} Feb 17 19:09:26 crc kubenswrapper[4680]: I0217 19:09:26.593016 4680 scope.go:117] "RemoveContainer" containerID="60cebeb777196bb7f130ee4ca9a2052d619c9f9e92af531e4e86bd88816af689" Feb 17 19:09:26 crc kubenswrapper[4680]: I0217 19:09:26.618290 4680 scope.go:117] "RemoveContainer" containerID="56f07bfd0248bc552c0f58a19f754b39c2e018f12d3626c331158a57c13a28b9" Feb 17 19:09:26 crc kubenswrapper[4680]: I0217 19:09:26.632334 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-52lpf"] Feb 17 19:09:26 crc kubenswrapper[4680]: I0217 19:09:26.645154 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-52lpf"] Feb 17 19:09:26 crc kubenswrapper[4680]: I0217 19:09:26.662616 4680 scope.go:117] "RemoveContainer" containerID="a3d6fcccea931075533ab8163f9803f5a28c25661e8577e1938e47fdff41d5cb" Feb 17 19:09:26 crc kubenswrapper[4680]: I0217 19:09:26.728164 4680 scope.go:117] "RemoveContainer" containerID="60cebeb777196bb7f130ee4ca9a2052d619c9f9e92af531e4e86bd88816af689" Feb 17 19:09:26 crc kubenswrapper[4680]: E0217 19:09:26.728678 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60cebeb777196bb7f130ee4ca9a2052d619c9f9e92af531e4e86bd88816af689\": container with ID starting with 60cebeb777196bb7f130ee4ca9a2052d619c9f9e92af531e4e86bd88816af689 not found: ID does not exist" containerID="60cebeb777196bb7f130ee4ca9a2052d619c9f9e92af531e4e86bd88816af689" Feb 17 19:09:26 crc kubenswrapper[4680]: I0217 19:09:26.728725 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60cebeb777196bb7f130ee4ca9a2052d619c9f9e92af531e4e86bd88816af689"} err="failed to get container status \"60cebeb777196bb7f130ee4ca9a2052d619c9f9e92af531e4e86bd88816af689\": rpc error: code = NotFound desc = could not find container \"60cebeb777196bb7f130ee4ca9a2052d619c9f9e92af531e4e86bd88816af689\": container with ID starting with 60cebeb777196bb7f130ee4ca9a2052d619c9f9e92af531e4e86bd88816af689 not found: ID does not exist" Feb 17 19:09:26 crc kubenswrapper[4680]: I0217 19:09:26.728749 4680 scope.go:117] "RemoveContainer" containerID="56f07bfd0248bc552c0f58a19f754b39c2e018f12d3626c331158a57c13a28b9" Feb 17 19:09:26 crc kubenswrapper[4680]: E0217 19:09:26.729398 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56f07bfd0248bc552c0f58a19f754b39c2e018f12d3626c331158a57c13a28b9\": container with ID starting with 56f07bfd0248bc552c0f58a19f754b39c2e018f12d3626c331158a57c13a28b9 not found: ID does not exist" containerID="56f07bfd0248bc552c0f58a19f754b39c2e018f12d3626c331158a57c13a28b9" Feb 17 19:09:26 crc kubenswrapper[4680]: I0217 19:09:26.729441 4680 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56f07bfd0248bc552c0f58a19f754b39c2e018f12d3626c331158a57c13a28b9"} err="failed to get container status \"56f07bfd0248bc552c0f58a19f754b39c2e018f12d3626c331158a57c13a28b9\": rpc error: code = NotFound desc = could not find container \"56f07bfd0248bc552c0f58a19f754b39c2e018f12d3626c331158a57c13a28b9\": container with ID starting with 56f07bfd0248bc552c0f58a19f754b39c2e018f12d3626c331158a57c13a28b9 not found: ID does not exist" Feb 17 19:09:26 crc kubenswrapper[4680]: I0217 19:09:26.729467 4680 scope.go:117] "RemoveContainer" containerID="a3d6fcccea931075533ab8163f9803f5a28c25661e8577e1938e47fdff41d5cb" Feb 17 19:09:26 crc kubenswrapper[4680]: E0217 19:09:26.729921 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3d6fcccea931075533ab8163f9803f5a28c25661e8577e1938e47fdff41d5cb\": container with ID starting with a3d6fcccea931075533ab8163f9803f5a28c25661e8577e1938e47fdff41d5cb not found: ID does not exist" containerID="a3d6fcccea931075533ab8163f9803f5a28c25661e8577e1938e47fdff41d5cb" Feb 17 19:09:26 crc kubenswrapper[4680]: I0217 19:09:26.729954 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3d6fcccea931075533ab8163f9803f5a28c25661e8577e1938e47fdff41d5cb"} err="failed to get container status \"a3d6fcccea931075533ab8163f9803f5a28c25661e8577e1938e47fdff41d5cb\": rpc error: code = NotFound desc = could not find container \"a3d6fcccea931075533ab8163f9803f5a28c25661e8577e1938e47fdff41d5cb\": container with ID starting with a3d6fcccea931075533ab8163f9803f5a28c25661e8577e1938e47fdff41d5cb not found: ID does not exist" Feb 17 19:09:27 crc kubenswrapper[4680]: I0217 19:09:27.114880 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7199bf44-9fdd-4e04-849b-a0a342cd4c0d" path="/var/lib/kubelet/pods/7199bf44-9fdd-4e04-849b-a0a342cd4c0d/volumes" Feb 17 19:09:33 crc kubenswrapper[4680]: I0217 19:09:33.105845 4680 scope.go:117] "RemoveContainer" containerID="510f9ea3cf36b25b4f055d58767be69f3b9b74d127a53a6bc4be5ef815799d22" Feb 17 19:09:33 crc kubenswrapper[4680]: E0217 19:09:33.106677 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 19:09:42 crc kubenswrapper[4680]: I0217 19:09:42.368722 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-fnhv4_c0f03e41-ea1c-4120-89b6-13445eeb9fd6/kube-rbac-proxy/0.log" Feb 17 19:09:42 crc kubenswrapper[4680]: I0217 19:09:42.414581 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-fnhv4_c0f03e41-ea1c-4120-89b6-13445eeb9fd6/controller/0.log" Feb 17 19:09:42 crc kubenswrapper[4680]: I0217 19:09:42.585695 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/cp-frr-files/0.log" Feb 17 19:09:42 crc kubenswrapper[4680]: I0217 19:09:42.741233 4680 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/cp-metrics/0.log" Feb 17 19:09:42 crc kubenswrapper[4680]: I0217 19:09:42.742560 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/cp-reloader/0.log" Feb 17 19:09:42 crc kubenswrapper[4680]: I0217 19:09:42.797489 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/cp-frr-files/0.log" Feb 17 19:09:42 crc kubenswrapper[4680]: I0217 19:09:42.834201 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/cp-reloader/0.log" Feb 17 19:09:43 crc kubenswrapper[4680]: I0217 19:09:43.016793 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/cp-metrics/0.log" Feb 17 19:09:43 crc kubenswrapper[4680]: I0217 19:09:43.017374 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/cp-reloader/0.log" Feb 17 19:09:43 crc kubenswrapper[4680]: I0217 19:09:43.078908 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/cp-frr-files/0.log" Feb 17 19:09:43 crc kubenswrapper[4680]: I0217 19:09:43.080519 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/cp-metrics/0.log" Feb 17 19:09:43 crc kubenswrapper[4680]: I0217 19:09:43.251118 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/cp-reloader/0.log" Feb 17 19:09:43 crc kubenswrapper[4680]: I0217 19:09:43.276790 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/cp-frr-files/0.log" Feb 17 19:09:43 crc kubenswrapper[4680]: I0217 19:09:43.296280 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/cp-metrics/0.log" Feb 17 19:09:43 crc kubenswrapper[4680]: I0217 19:09:43.331404 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/controller/0.log" Feb 17 19:09:43 crc kubenswrapper[4680]: I0217 19:09:43.527833 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/kube-rbac-proxy/0.log" Feb 17 19:09:43 crc kubenswrapper[4680]: I0217 19:09:43.543339 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/kube-rbac-proxy-frr/0.log" Feb 17 19:09:43 crc kubenswrapper[4680]: I0217 19:09:43.555928 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/frr-metrics/0.log" Feb 17 19:09:43 crc kubenswrapper[4680]: I0217 19:09:43.798304 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/reloader/0.log" Feb 17 19:09:43 crc kubenswrapper[4680]: I0217 19:09:43.849213 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-w72g5_65241e32-d4f0-4771-b88c-97d3750130e9/frr-k8s-webhook-server/0.log" Feb 17 
19:09:44 crc kubenswrapper[4680]: I0217 19:09:44.066843 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-847c44f7f8-ct2vk_bcb1b2bb-a7d3-41f4-a898-c563fceb659f/manager/0.log" Feb 17 19:09:44 crc kubenswrapper[4680]: I0217 19:09:44.332022 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-c4d794488-wgkbr_07691e4d-6667-4947-a3e3-de4d4d403aa5/webhook-server/0.log" Feb 17 19:09:44 crc kubenswrapper[4680]: I0217 19:09:44.470592 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wjjw2_4bbcd402-b03a-45bc-b343-95f81671cb35/kube-rbac-proxy/0.log" Feb 17 19:09:45 crc kubenswrapper[4680]: I0217 19:09:45.079342 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wjjw2_4bbcd402-b03a-45bc-b343-95f81671cb35/speaker/0.log" Feb 17 19:09:45 crc kubenswrapper[4680]: I0217 19:09:45.094566 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ntt64_509a8b1a-75ed-4f4b-b2aa-f79f7029d7fd/frr/0.log" Feb 17 19:09:46 crc kubenswrapper[4680]: I0217 19:09:46.105903 4680 scope.go:117] "RemoveContainer" containerID="510f9ea3cf36b25b4f055d58767be69f3b9b74d127a53a6bc4be5ef815799d22" Feb 17 19:09:46 crc kubenswrapper[4680]: E0217 19:09:46.106450 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 19:09:57 crc kubenswrapper[4680]: I0217 19:09:57.810402 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk_9c7369ac-8367-419f-b50f-82d0ba1cef96/util/0.log" Feb 17 19:09:58 crc kubenswrapper[4680]: I0217 19:09:58.051013 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk_9c7369ac-8367-419f-b50f-82d0ba1cef96/pull/0.log" Feb 17 19:09:58 crc kubenswrapper[4680]: I0217 19:09:58.051379 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk_9c7369ac-8367-419f-b50f-82d0ba1cef96/util/0.log" Feb 17 19:09:58 crc kubenswrapper[4680]: I0217 19:09:58.059946 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk_9c7369ac-8367-419f-b50f-82d0ba1cef96/pull/0.log" Feb 17 19:09:58 crc kubenswrapper[4680]: I0217 19:09:58.293698 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk_9c7369ac-8367-419f-b50f-82d0ba1cef96/extract/0.log" Feb 17 19:09:58 crc kubenswrapper[4680]: I0217 19:09:58.318564 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk_9c7369ac-8367-419f-b50f-82d0ba1cef96/pull/0.log" Feb 17 19:09:58 crc kubenswrapper[4680]: I0217 19:09:58.379372 4680 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bj8xk_9c7369ac-8367-419f-b50f-82d0ba1cef96/util/0.log" Feb 17 19:09:58 crc kubenswrapper[4680]: I0217 19:09:58.510191 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b96s8_88dae8bc-f19a-4b03-9443-19af5340b54e/extract-utilities/0.log" Feb 17 19:09:58 crc kubenswrapper[4680]: I0217 19:09:58.707671 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b96s8_88dae8bc-f19a-4b03-9443-19af5340b54e/extract-utilities/0.log" Feb 17 19:09:58 crc kubenswrapper[4680]: I0217 19:09:58.759003 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b96s8_88dae8bc-f19a-4b03-9443-19af5340b54e/extract-content/0.log" Feb 17 19:09:58 crc kubenswrapper[4680]: I0217 19:09:58.761729 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b96s8_88dae8bc-f19a-4b03-9443-19af5340b54e/extract-content/0.log" Feb 17 19:09:58 crc kubenswrapper[4680]: I0217 19:09:58.937865 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b96s8_88dae8bc-f19a-4b03-9443-19af5340b54e/extract-content/0.log" Feb 17 19:09:58 crc kubenswrapper[4680]: I0217 19:09:58.945077 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b96s8_88dae8bc-f19a-4b03-9443-19af5340b54e/extract-utilities/0.log" Feb 17 19:09:59 crc kubenswrapper[4680]: I0217 19:09:59.184952 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-px9ln_3bb4764c-4bf2-46c1-a940-0aa75a205881/extract-utilities/0.log" Feb 17 19:09:59 crc kubenswrapper[4680]: I0217 19:09:59.547663 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b96s8_88dae8bc-f19a-4b03-9443-19af5340b54e/registry-server/0.log" Feb 17 19:09:59 crc kubenswrapper[4680]: I0217 19:09:59.635391 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-px9ln_3bb4764c-4bf2-46c1-a940-0aa75a205881/extract-utilities/0.log" Feb 17 19:09:59 crc kubenswrapper[4680]: I0217 19:09:59.693155 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-px9ln_3bb4764c-4bf2-46c1-a940-0aa75a205881/extract-content/0.log" Feb 17 19:09:59 crc kubenswrapper[4680]: I0217 19:09:59.707805 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-px9ln_3bb4764c-4bf2-46c1-a940-0aa75a205881/extract-content/0.log" Feb 17 19:09:59 crc kubenswrapper[4680]: I0217 19:09:59.845312 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-px9ln_3bb4764c-4bf2-46c1-a940-0aa75a205881/extract-utilities/0.log" Feb 17 19:09:59 crc kubenswrapper[4680]: I0217 19:09:59.862691 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-px9ln_3bb4764c-4bf2-46c1-a940-0aa75a205881/extract-content/0.log" Feb 17 19:10:00 crc kubenswrapper[4680]: I0217 19:10:00.141988 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j_41e38977-61de-46c0-8a84-9cf4de5d5156/util/0.log" Feb 17 19:10:00 crc kubenswrapper[4680]: I0217 19:10:00.421350 4680 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j_41e38977-61de-46c0-8a84-9cf4de5d5156/util/0.log" Feb 17 19:10:00 crc kubenswrapper[4680]: I0217 19:10:00.475492 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j_41e38977-61de-46c0-8a84-9cf4de5d5156/pull/0.log" Feb 17 19:10:00 crc kubenswrapper[4680]: I0217 19:10:00.506475 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-px9ln_3bb4764c-4bf2-46c1-a940-0aa75a205881/registry-server/0.log" Feb 17 19:10:00 crc kubenswrapper[4680]: I0217 19:10:00.536459 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j_41e38977-61de-46c0-8a84-9cf4de5d5156/pull/0.log" Feb 17 19:10:00 crc kubenswrapper[4680]: I0217 19:10:00.692389 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j_41e38977-61de-46c0-8a84-9cf4de5d5156/util/0.log" Feb 17 19:10:00 crc kubenswrapper[4680]: I0217 19:10:00.711758 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j_41e38977-61de-46c0-8a84-9cf4de5d5156/pull/0.log" Feb 17 19:10:00 crc kubenswrapper[4680]: I0217 19:10:00.716399 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5jk7j_41e38977-61de-46c0-8a84-9cf4de5d5156/extract/0.log" Feb 17 19:10:00 crc kubenswrapper[4680]: I0217 19:10:00.925751 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-slcqr_b8d8d226-a13e-4e0c-b3b9-8fe70a25d02a/marketplace-operator/0.log" Feb 17 19:10:00 crc kubenswrapper[4680]: I0217 19:10:00.989501 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6lssp_2660862d-dbb8-465e-a47d-91dab06d81cb/extract-utilities/0.log" Feb 17 19:10:01 crc kubenswrapper[4680]: I0217 19:10:01.105705 4680 scope.go:117] "RemoveContainer" containerID="510f9ea3cf36b25b4f055d58767be69f3b9b74d127a53a6bc4be5ef815799d22" Feb 17 19:10:01 crc kubenswrapper[4680]: E0217 19:10:01.106031 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 19:10:01 crc kubenswrapper[4680]: I0217 19:10:01.192679 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6lssp_2660862d-dbb8-465e-a47d-91dab06d81cb/extract-content/0.log" Feb 17 19:10:01 crc kubenswrapper[4680]: I0217 19:10:01.203117 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6lssp_2660862d-dbb8-465e-a47d-91dab06d81cb/extract-utilities/0.log" Feb 17 19:10:01 crc kubenswrapper[4680]: I0217 19:10:01.251396 4680 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-6lssp_2660862d-dbb8-465e-a47d-91dab06d81cb/extract-content/0.log" Feb 17 19:10:01 crc kubenswrapper[4680]: I0217 19:10:01.435339 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6lssp_2660862d-dbb8-465e-a47d-91dab06d81cb/extract-utilities/0.log" Feb 17 19:10:01 crc kubenswrapper[4680]: I0217 19:10:01.464803 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6lssp_2660862d-dbb8-465e-a47d-91dab06d81cb/extract-content/0.log" Feb 17 19:10:01 crc kubenswrapper[4680]: I0217 19:10:01.601979 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6lssp_2660862d-dbb8-465e-a47d-91dab06d81cb/registry-server/0.log" Feb 17 19:10:01 crc kubenswrapper[4680]: I0217 19:10:01.713888 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cc5x2_36277038-c2ae-486f-aa6d-ac179eb779a5/extract-utilities/0.log" Feb 17 19:10:01 crc kubenswrapper[4680]: I0217 19:10:01.945156 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cc5x2_36277038-c2ae-486f-aa6d-ac179eb779a5/extract-content/0.log" Feb 17 19:10:01 crc kubenswrapper[4680]: I0217 19:10:01.996338 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cc5x2_36277038-c2ae-486f-aa6d-ac179eb779a5/extract-content/0.log" Feb 17 19:10:02 crc kubenswrapper[4680]: I0217 19:10:02.103710 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cc5x2_36277038-c2ae-486f-aa6d-ac179eb779a5/extract-utilities/0.log" Feb 17 19:10:02 crc kubenswrapper[4680]: I0217 19:10:02.219957 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cc5x2_36277038-c2ae-486f-aa6d-ac179eb779a5/extract-utilities/0.log" Feb 17 19:10:02 crc kubenswrapper[4680]: I0217 19:10:02.261030 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cc5x2_36277038-c2ae-486f-aa6d-ac179eb779a5/extract-content/0.log" Feb 17 19:10:02 crc kubenswrapper[4680]: I0217 19:10:02.851580 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cc5x2_36277038-c2ae-486f-aa6d-ac179eb779a5/registry-server/0.log" Feb 17 19:10:13 crc kubenswrapper[4680]: I0217 19:10:13.106041 4680 scope.go:117] "RemoveContainer" containerID="510f9ea3cf36b25b4f055d58767be69f3b9b74d127a53a6bc4be5ef815799d22" Feb 17 19:10:13 crc kubenswrapper[4680]: E0217 19:10:13.107194 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 19:10:25 crc kubenswrapper[4680]: I0217 19:10:25.105212 4680 scope.go:117] "RemoveContainer" containerID="510f9ea3cf36b25b4f055d58767be69f3b9b74d127a53a6bc4be5ef815799d22" Feb 17 19:10:25 crc kubenswrapper[4680]: E0217 19:10:25.105919 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 19:10:39 crc kubenswrapper[4680]: I0217 19:10:39.107381 4680 scope.go:117] "RemoveContainer" containerID="510f9ea3cf36b25b4f055d58767be69f3b9b74d127a53a6bc4be5ef815799d22" Feb 17 19:10:39 crc kubenswrapper[4680]: E0217 19:10:39.108084 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 19:10:53 crc kubenswrapper[4680]: I0217 19:10:53.108158 4680 scope.go:117] "RemoveContainer" containerID="510f9ea3cf36b25b4f055d58767be69f3b9b74d127a53a6bc4be5ef815799d22" Feb 17 19:10:53 crc kubenswrapper[4680]: E0217 19:10:53.111708 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 19:11:07 crc kubenswrapper[4680]: I0217 19:11:07.106144 4680 scope.go:117] "RemoveContainer" containerID="510f9ea3cf36b25b4f055d58767be69f3b9b74d127a53a6bc4be5ef815799d22" Feb 17 19:11:07 crc kubenswrapper[4680]: E0217 19:11:07.107079 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 19:11:19 crc kubenswrapper[4680]: I0217 19:11:19.105343 4680 scope.go:117] "RemoveContainer" containerID="510f9ea3cf36b25b4f055d58767be69f3b9b74d127a53a6bc4be5ef815799d22" Feb 17 19:11:19 crc kubenswrapper[4680]: E0217 19:11:19.106181 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 19:11:33 crc kubenswrapper[4680]: I0217 19:11:33.107518 4680 scope.go:117] "RemoveContainer" containerID="510f9ea3cf36b25b4f055d58767be69f3b9b74d127a53a6bc4be5ef815799d22" Feb 17 19:11:33 crc kubenswrapper[4680]: E0217 19:11:33.108458 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 19:11:47 crc kubenswrapper[4680]: I0217 19:11:47.106398 4680 scope.go:117] "RemoveContainer" containerID="510f9ea3cf36b25b4f055d58767be69f3b9b74d127a53a6bc4be5ef815799d22" Feb 17 19:11:47 crc kubenswrapper[4680]: E0217 19:11:47.107223 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 19:11:57 crc kubenswrapper[4680]: I0217 19:11:57.425978 4680 scope.go:117] "RemoveContainer" containerID="830a75380aaa214d80c3e65b22bd3950b5bb4d2241c54bcf842e5ba03b345a24" Feb 17 19:11:58 crc kubenswrapper[4680]: I0217 19:11:58.105920 4680 scope.go:117] "RemoveContainer" containerID="510f9ea3cf36b25b4f055d58767be69f3b9b74d127a53a6bc4be5ef815799d22" Feb 17 19:11:58 crc kubenswrapper[4680]: E0217 19:11:58.106252 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 19:12:13 crc kubenswrapper[4680]: I0217 19:12:13.104757 4680 scope.go:117] "RemoveContainer" containerID="510f9ea3cf36b25b4f055d58767be69f3b9b74d127a53a6bc4be5ef815799d22" Feb 17 19:12:13 crc kubenswrapper[4680]: E0217 19:12:13.105563 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 19:12:24 crc kubenswrapper[4680]: I0217 19:12:24.106088 4680 scope.go:117] "RemoveContainer" containerID="510f9ea3cf36b25b4f055d58767be69f3b9b74d127a53a6bc4be5ef815799d22" Feb 17 19:12:24 crc kubenswrapper[4680]: E0217 19:12:24.107617 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 19:12:27 crc kubenswrapper[4680]: I0217 19:12:27.787933 4680 generic.go:334] "Generic (PLEG): container finished" podID="3ef5c723-d0a3-4d86-90fe-e8a90ebda679" containerID="67dc94b206fbddb6ac44785f2e8eaf1bae7ca776f2af0e90c03bcd72d390b4f7" exitCode=0 Feb 17 19:12:27 crc kubenswrapper[4680]: I0217 19:12:27.788009 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rrvkc/must-gather-7gjz6" 
event={"ID":"3ef5c723-d0a3-4d86-90fe-e8a90ebda679","Type":"ContainerDied","Data":"67dc94b206fbddb6ac44785f2e8eaf1bae7ca776f2af0e90c03bcd72d390b4f7"} Feb 17 19:12:27 crc kubenswrapper[4680]: I0217 19:12:27.789086 4680 scope.go:117] "RemoveContainer" containerID="67dc94b206fbddb6ac44785f2e8eaf1bae7ca776f2af0e90c03bcd72d390b4f7" Feb 17 19:12:28 crc kubenswrapper[4680]: I0217 19:12:28.067424 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rrvkc_must-gather-7gjz6_3ef5c723-d0a3-4d86-90fe-e8a90ebda679/gather/0.log" Feb 17 19:12:36 crc kubenswrapper[4680]: I0217 19:12:36.105286 4680 scope.go:117] "RemoveContainer" containerID="510f9ea3cf36b25b4f055d58767be69f3b9b74d127a53a6bc4be5ef815799d22" Feb 17 19:12:36 crc kubenswrapper[4680]: E0217 19:12:36.106157 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 19:12:41 crc kubenswrapper[4680]: I0217 19:12:41.193458 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rrvkc/must-gather-7gjz6"] Feb 17 19:12:41 crc kubenswrapper[4680]: I0217 19:12:41.194157 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-rrvkc/must-gather-7gjz6" podUID="3ef5c723-d0a3-4d86-90fe-e8a90ebda679" containerName="copy" containerID="cri-o://55236760eb39d6f6be0f1c6da88328900432df36041cfb039736679995becd75" gracePeriod=2 Feb 17 19:12:41 crc kubenswrapper[4680]: I0217 19:12:41.200588 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rrvkc/must-gather-7gjz6"] Feb 17 19:12:41 crc kubenswrapper[4680]: I0217 19:12:41.668097 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rrvkc_must-gather-7gjz6_3ef5c723-d0a3-4d86-90fe-e8a90ebda679/copy/0.log" Feb 17 19:12:41 crc kubenswrapper[4680]: I0217 19:12:41.668765 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rrvkc/must-gather-7gjz6" Feb 17 19:12:41 crc kubenswrapper[4680]: I0217 19:12:41.844806 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3ef5c723-d0a3-4d86-90fe-e8a90ebda679-must-gather-output\") pod \"3ef5c723-d0a3-4d86-90fe-e8a90ebda679\" (UID: \"3ef5c723-d0a3-4d86-90fe-e8a90ebda679\") " Feb 17 19:12:41 crc kubenswrapper[4680]: I0217 19:12:41.845180 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpznm\" (UniqueName: \"kubernetes.io/projected/3ef5c723-d0a3-4d86-90fe-e8a90ebda679-kube-api-access-qpznm\") pod \"3ef5c723-d0a3-4d86-90fe-e8a90ebda679\" (UID: \"3ef5c723-d0a3-4d86-90fe-e8a90ebda679\") " Feb 17 19:12:41 crc kubenswrapper[4680]: I0217 19:12:41.860618 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ef5c723-d0a3-4d86-90fe-e8a90ebda679-kube-api-access-qpznm" (OuterVolumeSpecName: "kube-api-access-qpznm") pod "3ef5c723-d0a3-4d86-90fe-e8a90ebda679" (UID: "3ef5c723-d0a3-4d86-90fe-e8a90ebda679"). InnerVolumeSpecName "kube-api-access-qpznm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 19:12:41 crc kubenswrapper[4680]: I0217 19:12:41.930908 4680 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rrvkc_must-gather-7gjz6_3ef5c723-d0a3-4d86-90fe-e8a90ebda679/copy/0.log" Feb 17 19:12:41 crc kubenswrapper[4680]: I0217 19:12:41.936036 4680 generic.go:334] "Generic (PLEG): container finished" podID="3ef5c723-d0a3-4d86-90fe-e8a90ebda679" containerID="55236760eb39d6f6be0f1c6da88328900432df36041cfb039736679995becd75" exitCode=143 Feb 17 19:12:41 crc kubenswrapper[4680]: I0217 19:12:41.936194 4680 scope.go:117] "RemoveContainer" containerID="55236760eb39d6f6be0f1c6da88328900432df36041cfb039736679995becd75" Feb 17 19:12:41 crc kubenswrapper[4680]: I0217 19:12:41.936422 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rrvkc/must-gather-7gjz6" Feb 17 19:12:41 crc kubenswrapper[4680]: I0217 19:12:41.948270 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpznm\" (UniqueName: \"kubernetes.io/projected/3ef5c723-d0a3-4d86-90fe-e8a90ebda679-kube-api-access-qpznm\") on node \"crc\" DevicePath \"\"" Feb 17 19:12:41 crc kubenswrapper[4680]: I0217 19:12:41.969562 4680 scope.go:117] "RemoveContainer" containerID="67dc94b206fbddb6ac44785f2e8eaf1bae7ca776f2af0e90c03bcd72d390b4f7" Feb 17 19:12:42 crc kubenswrapper[4680]: I0217 19:12:42.094527 4680 scope.go:117] "RemoveContainer" containerID="55236760eb39d6f6be0f1c6da88328900432df36041cfb039736679995becd75" Feb 17 19:12:42 crc kubenswrapper[4680]: E0217 19:12:42.098216 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55236760eb39d6f6be0f1c6da88328900432df36041cfb039736679995becd75\": container with ID starting with 55236760eb39d6f6be0f1c6da88328900432df36041cfb039736679995becd75 not found: ID does not exist" containerID="55236760eb39d6f6be0f1c6da88328900432df36041cfb039736679995becd75" Feb 17 19:12:42 crc kubenswrapper[4680]: I0217 19:12:42.098343 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55236760eb39d6f6be0f1c6da88328900432df36041cfb039736679995becd75"} err="failed to get container status \"55236760eb39d6f6be0f1c6da88328900432df36041cfb039736679995becd75\": rpc error: code = NotFound desc = could not find container \"55236760eb39d6f6be0f1c6da88328900432df36041cfb039736679995becd75\": container with ID starting with 55236760eb39d6f6be0f1c6da88328900432df36041cfb039736679995becd75 not found: ID does not exist" Feb 17 19:12:42 crc kubenswrapper[4680]: I0217 19:12:42.098376 4680 scope.go:117] "RemoveContainer" containerID="67dc94b206fbddb6ac44785f2e8eaf1bae7ca776f2af0e90c03bcd72d390b4f7" Feb 17 19:12:42 crc kubenswrapper[4680]: E0217 19:12:42.099718 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67dc94b206fbddb6ac44785f2e8eaf1bae7ca776f2af0e90c03bcd72d390b4f7\": container with ID starting with 67dc94b206fbddb6ac44785f2e8eaf1bae7ca776f2af0e90c03bcd72d390b4f7 not found: ID does not exist" containerID="67dc94b206fbddb6ac44785f2e8eaf1bae7ca776f2af0e90c03bcd72d390b4f7" Feb 17 19:12:42 crc kubenswrapper[4680]: I0217 19:12:42.099769 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67dc94b206fbddb6ac44785f2e8eaf1bae7ca776f2af0e90c03bcd72d390b4f7"} err="failed to get container status 
\"67dc94b206fbddb6ac44785f2e8eaf1bae7ca776f2af0e90c03bcd72d390b4f7\": rpc error: code = NotFound desc = could not find container \"67dc94b206fbddb6ac44785f2e8eaf1bae7ca776f2af0e90c03bcd72d390b4f7\": container with ID starting with 67dc94b206fbddb6ac44785f2e8eaf1bae7ca776f2af0e90c03bcd72d390b4f7 not found: ID does not exist" Feb 17 19:12:42 crc kubenswrapper[4680]: I0217 19:12:42.160465 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ef5c723-d0a3-4d86-90fe-e8a90ebda679-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "3ef5c723-d0a3-4d86-90fe-e8a90ebda679" (UID: "3ef5c723-d0a3-4d86-90fe-e8a90ebda679"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 19:12:42 crc kubenswrapper[4680]: I0217 19:12:42.254548 4680 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3ef5c723-d0a3-4d86-90fe-e8a90ebda679-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 17 19:12:43 crc kubenswrapper[4680]: I0217 19:12:43.118354 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ef5c723-d0a3-4d86-90fe-e8a90ebda679" path="/var/lib/kubelet/pods/3ef5c723-d0a3-4d86-90fe-e8a90ebda679/volumes" Feb 17 19:12:47 crc kubenswrapper[4680]: I0217 19:12:47.105825 4680 scope.go:117] "RemoveContainer" containerID="510f9ea3cf36b25b4f055d58767be69f3b9b74d127a53a6bc4be5ef815799d22" Feb 17 19:12:47 crc kubenswrapper[4680]: E0217 19:12:47.106716 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 19:12:57 crc kubenswrapper[4680]: I0217 19:12:57.500864 4680 scope.go:117] "RemoveContainer" containerID="56a03e147d16f396947306a93fef52265b49463bb47a483708a3a51b2b7d23b2" Feb 17 19:13:00 crc kubenswrapper[4680]: I0217 19:13:00.104906 4680 scope.go:117] "RemoveContainer" containerID="510f9ea3cf36b25b4f055d58767be69f3b9b74d127a53a6bc4be5ef815799d22" Feb 17 19:13:00 crc kubenswrapper[4680]: E0217 19:13:00.105540 4680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfw56_openshift-machine-config-operator(8dd08058-016e-4607-8be6-40463674c2fe)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" Feb 17 19:13:13 crc kubenswrapper[4680]: I0217 19:13:13.105905 4680 scope.go:117] "RemoveContainer" containerID="510f9ea3cf36b25b4f055d58767be69f3b9b74d127a53a6bc4be5ef815799d22" Feb 17 19:13:14 crc kubenswrapper[4680]: I0217 19:13:14.195942 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" event={"ID":"8dd08058-016e-4607-8be6-40463674c2fe","Type":"ContainerStarted","Data":"ff7add3e785d2e504a19a078fc0316d6f051f152dd1ed9c6ab8192586b291cee"} Feb 17 19:14:01 crc kubenswrapper[4680]: I0217 19:14:01.730865 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wpvk8"] Feb 17 19:14:01 crc 
kubenswrapper[4680]: E0217 19:14:01.738910 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7199bf44-9fdd-4e04-849b-a0a342cd4c0d" containerName="extract-utilities" Feb 17 19:14:01 crc kubenswrapper[4680]: I0217 19:14:01.738944 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="7199bf44-9fdd-4e04-849b-a0a342cd4c0d" containerName="extract-utilities" Feb 17 19:14:01 crc kubenswrapper[4680]: E0217 19:14:01.738970 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef5c723-d0a3-4d86-90fe-e8a90ebda679" containerName="gather" Feb 17 19:14:01 crc kubenswrapper[4680]: I0217 19:14:01.738979 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef5c723-d0a3-4d86-90fe-e8a90ebda679" containerName="gather" Feb 17 19:14:01 crc kubenswrapper[4680]: E0217 19:14:01.738999 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef5c723-d0a3-4d86-90fe-e8a90ebda679" containerName="copy" Feb 17 19:14:01 crc kubenswrapper[4680]: I0217 19:14:01.739007 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef5c723-d0a3-4d86-90fe-e8a90ebda679" containerName="copy" Feb 17 19:14:01 crc kubenswrapper[4680]: E0217 19:14:01.739027 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7199bf44-9fdd-4e04-849b-a0a342cd4c0d" containerName="extract-content" Feb 17 19:14:01 crc kubenswrapper[4680]: I0217 19:14:01.739037 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="7199bf44-9fdd-4e04-849b-a0a342cd4c0d" containerName="extract-content" Feb 17 19:14:01 crc kubenswrapper[4680]: E0217 19:14:01.739061 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7199bf44-9fdd-4e04-849b-a0a342cd4c0d" containerName="registry-server" Feb 17 19:14:01 crc kubenswrapper[4680]: I0217 19:14:01.739069 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="7199bf44-9fdd-4e04-849b-a0a342cd4c0d" containerName="registry-server" Feb 17 19:14:01 crc kubenswrapper[4680]: I0217 19:14:01.739296 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="7199bf44-9fdd-4e04-849b-a0a342cd4c0d" containerName="registry-server" Feb 17 19:14:01 crc kubenswrapper[4680]: I0217 19:14:01.739336 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ef5c723-d0a3-4d86-90fe-e8a90ebda679" containerName="gather" Feb 17 19:14:01 crc kubenswrapper[4680]: I0217 19:14:01.739357 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ef5c723-d0a3-4d86-90fe-e8a90ebda679" containerName="copy" Feb 17 19:14:01 crc kubenswrapper[4680]: I0217 19:14:01.741141 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wpvk8" Feb 17 19:14:01 crc kubenswrapper[4680]: I0217 19:14:01.741521 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wpvk8"] Feb 17 19:14:01 crc kubenswrapper[4680]: I0217 19:14:01.907858 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k77x\" (UniqueName: \"kubernetes.io/projected/8d5ccc54-f637-4be2-bba0-b57c67c7c9c4-kube-api-access-4k77x\") pod \"redhat-marketplace-wpvk8\" (UID: \"8d5ccc54-f637-4be2-bba0-b57c67c7c9c4\") " pod="openshift-marketplace/redhat-marketplace-wpvk8" Feb 17 19:14:01 crc kubenswrapper[4680]: I0217 19:14:01.907988 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d5ccc54-f637-4be2-bba0-b57c67c7c9c4-utilities\") pod \"redhat-marketplace-wpvk8\" (UID: \"8d5ccc54-f637-4be2-bba0-b57c67c7c9c4\") " pod="openshift-marketplace/redhat-marketplace-wpvk8" Feb 17 19:14:01 crc kubenswrapper[4680]: I0217 19:14:01.908045 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d5ccc54-f637-4be2-bba0-b57c67c7c9c4-catalog-content\") pod \"redhat-marketplace-wpvk8\" (UID: \"8d5ccc54-f637-4be2-bba0-b57c67c7c9c4\") " pod="openshift-marketplace/redhat-marketplace-wpvk8" Feb 17 19:14:02 crc kubenswrapper[4680]: I0217 19:14:02.009653 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d5ccc54-f637-4be2-bba0-b57c67c7c9c4-utilities\") pod \"redhat-marketplace-wpvk8\" (UID: \"8d5ccc54-f637-4be2-bba0-b57c67c7c9c4\") " pod="openshift-marketplace/redhat-marketplace-wpvk8" Feb 17 19:14:02 crc kubenswrapper[4680]: I0217 19:14:02.010106 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d5ccc54-f637-4be2-bba0-b57c67c7c9c4-catalog-content\") pod \"redhat-marketplace-wpvk8\" (UID: \"8d5ccc54-f637-4be2-bba0-b57c67c7c9c4\") " pod="openshift-marketplace/redhat-marketplace-wpvk8" Feb 17 19:14:02 crc kubenswrapper[4680]: I0217 19:14:02.010271 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k77x\" (UniqueName: \"kubernetes.io/projected/8d5ccc54-f637-4be2-bba0-b57c67c7c9c4-kube-api-access-4k77x\") pod \"redhat-marketplace-wpvk8\" (UID: \"8d5ccc54-f637-4be2-bba0-b57c67c7c9c4\") " pod="openshift-marketplace/redhat-marketplace-wpvk8" Feb 17 19:14:02 crc kubenswrapper[4680]: I0217 19:14:02.010415 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d5ccc54-f637-4be2-bba0-b57c67c7c9c4-utilities\") pod \"redhat-marketplace-wpvk8\" (UID: \"8d5ccc54-f637-4be2-bba0-b57c67c7c9c4\") " pod="openshift-marketplace/redhat-marketplace-wpvk8" Feb 17 19:14:02 crc kubenswrapper[4680]: I0217 19:14:02.010522 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d5ccc54-f637-4be2-bba0-b57c67c7c9c4-catalog-content\") pod \"redhat-marketplace-wpvk8\" (UID: \"8d5ccc54-f637-4be2-bba0-b57c67c7c9c4\") " pod="openshift-marketplace/redhat-marketplace-wpvk8" Feb 17 19:14:02 crc kubenswrapper[4680]: I0217 19:14:02.028815 4680 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-4k77x\" (UniqueName: \"kubernetes.io/projected/8d5ccc54-f637-4be2-bba0-b57c67c7c9c4-kube-api-access-4k77x\") pod \"redhat-marketplace-wpvk8\" (UID: \"8d5ccc54-f637-4be2-bba0-b57c67c7c9c4\") " pod="openshift-marketplace/redhat-marketplace-wpvk8" Feb 17 19:14:02 crc kubenswrapper[4680]: I0217 19:14:02.071841 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wpvk8" Feb 17 19:14:02 crc kubenswrapper[4680]: I0217 19:14:02.525307 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wpvk8"] Feb 17 19:14:02 crc kubenswrapper[4680]: W0217 19:14:02.536074 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d5ccc54_f637_4be2_bba0_b57c67c7c9c4.slice/crio-c8b9cef611902faf3c0d9a2a153da7fdd5427f1fbe059a065253f2395b3577c9 WatchSource:0}: Error finding container c8b9cef611902faf3c0d9a2a153da7fdd5427f1fbe059a065253f2395b3577c9: Status 404 returned error can't find the container with id c8b9cef611902faf3c0d9a2a153da7fdd5427f1fbe059a065253f2395b3577c9 Feb 17 19:14:02 crc kubenswrapper[4680]: I0217 19:14:02.635940 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wpvk8" event={"ID":"8d5ccc54-f637-4be2-bba0-b57c67c7c9c4","Type":"ContainerStarted","Data":"c8b9cef611902faf3c0d9a2a153da7fdd5427f1fbe059a065253f2395b3577c9"} Feb 17 19:14:03 crc kubenswrapper[4680]: I0217 19:14:03.647154 4680 generic.go:334] "Generic (PLEG): container finished" podID="8d5ccc54-f637-4be2-bba0-b57c67c7c9c4" containerID="d3893cc0a1b0243db3eda5851f263c7ed8225dcfa4d62a285ccc4e2b448502a4" exitCode=0 Feb 17 19:14:03 crc kubenswrapper[4680]: I0217 19:14:03.647240 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wpvk8" event={"ID":"8d5ccc54-f637-4be2-bba0-b57c67c7c9c4","Type":"ContainerDied","Data":"d3893cc0a1b0243db3eda5851f263c7ed8225dcfa4d62a285ccc4e2b448502a4"} Feb 17 19:14:03 crc kubenswrapper[4680]: I0217 19:14:03.649840 4680 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 19:14:05 crc kubenswrapper[4680]: I0217 19:14:05.667146 4680 generic.go:334] "Generic (PLEG): container finished" podID="8d5ccc54-f637-4be2-bba0-b57c67c7c9c4" containerID="9f243e5d59d1f8f345c240c99580b98952dafa120ee1d2465a17a792b6fbce3d" exitCode=0 Feb 17 19:14:05 crc kubenswrapper[4680]: I0217 19:14:05.667198 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wpvk8" event={"ID":"8d5ccc54-f637-4be2-bba0-b57c67c7c9c4","Type":"ContainerDied","Data":"9f243e5d59d1f8f345c240c99580b98952dafa120ee1d2465a17a792b6fbce3d"} Feb 17 19:14:06 crc kubenswrapper[4680]: I0217 19:14:06.681933 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wpvk8" event={"ID":"8d5ccc54-f637-4be2-bba0-b57c67c7c9c4","Type":"ContainerStarted","Data":"e18dfe01d87b3c656f08c5ee945d4b735c5738f2911715d74381ab9fcd32de80"} Feb 17 19:14:06 crc kubenswrapper[4680]: I0217 19:14:06.710342 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wpvk8" podStartSLOduration=3.134503128 podStartE2EDuration="5.710310082s" podCreationTimestamp="2026-02-17 19:14:01 +0000 UTC" firstStartedPulling="2026-02-17 19:14:03.649352104 +0000 UTC m=+5373.200250813" 
lastFinishedPulling="2026-02-17 19:14:06.225159088 +0000 UTC m=+5375.776057767" observedRunningTime="2026-02-17 19:14:06.703680265 +0000 UTC m=+5376.254578944" watchObservedRunningTime="2026-02-17 19:14:06.710310082 +0000 UTC m=+5376.261208761" Feb 17 19:14:12 crc kubenswrapper[4680]: I0217 19:14:12.072748 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wpvk8" Feb 17 19:14:12 crc kubenswrapper[4680]: I0217 19:14:12.073350 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wpvk8" Feb 17 19:14:12 crc kubenswrapper[4680]: I0217 19:14:12.136843 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wpvk8" Feb 17 19:14:12 crc kubenswrapper[4680]: I0217 19:14:12.779803 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wpvk8" Feb 17 19:14:13 crc kubenswrapper[4680]: I0217 19:14:13.717232 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wpvk8"] Feb 17 19:14:14 crc kubenswrapper[4680]: I0217 19:14:14.742625 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wpvk8" podUID="8d5ccc54-f637-4be2-bba0-b57c67c7c9c4" containerName="registry-server" containerID="cri-o://e18dfe01d87b3c656f08c5ee945d4b735c5738f2911715d74381ab9fcd32de80" gracePeriod=2 Feb 17 19:14:15 crc kubenswrapper[4680]: I0217 19:14:15.222410 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wpvk8" Feb 17 19:14:15 crc kubenswrapper[4680]: I0217 19:14:15.275382 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d5ccc54-f637-4be2-bba0-b57c67c7c9c4-utilities\") pod \"8d5ccc54-f637-4be2-bba0-b57c67c7c9c4\" (UID: \"8d5ccc54-f637-4be2-bba0-b57c67c7c9c4\") " Feb 17 19:14:15 crc kubenswrapper[4680]: I0217 19:14:15.275762 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d5ccc54-f637-4be2-bba0-b57c67c7c9c4-catalog-content\") pod \"8d5ccc54-f637-4be2-bba0-b57c67c7c9c4\" (UID: \"8d5ccc54-f637-4be2-bba0-b57c67c7c9c4\") " Feb 17 19:14:15 crc kubenswrapper[4680]: I0217 19:14:15.275795 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k77x\" (UniqueName: \"kubernetes.io/projected/8d5ccc54-f637-4be2-bba0-b57c67c7c9c4-kube-api-access-4k77x\") pod \"8d5ccc54-f637-4be2-bba0-b57c67c7c9c4\" (UID: \"8d5ccc54-f637-4be2-bba0-b57c67c7c9c4\") " Feb 17 19:14:15 crc kubenswrapper[4680]: I0217 19:14:15.277808 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d5ccc54-f637-4be2-bba0-b57c67c7c9c4-utilities" (OuterVolumeSpecName: "utilities") pod "8d5ccc54-f637-4be2-bba0-b57c67c7c9c4" (UID: "8d5ccc54-f637-4be2-bba0-b57c67c7c9c4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 19:14:15 crc kubenswrapper[4680]: I0217 19:14:15.286608 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d5ccc54-f637-4be2-bba0-b57c67c7c9c4-kube-api-access-4k77x" (OuterVolumeSpecName: "kube-api-access-4k77x") pod "8d5ccc54-f637-4be2-bba0-b57c67c7c9c4" (UID: "8d5ccc54-f637-4be2-bba0-b57c67c7c9c4"). InnerVolumeSpecName "kube-api-access-4k77x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 19:14:15 crc kubenswrapper[4680]: I0217 19:14:15.305563 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d5ccc54-f637-4be2-bba0-b57c67c7c9c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d5ccc54-f637-4be2-bba0-b57c67c7c9c4" (UID: "8d5ccc54-f637-4be2-bba0-b57c67c7c9c4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 19:14:15 crc kubenswrapper[4680]: I0217 19:14:15.377724 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d5ccc54-f637-4be2-bba0-b57c67c7c9c4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 19:14:15 crc kubenswrapper[4680]: I0217 19:14:15.377755 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k77x\" (UniqueName: \"kubernetes.io/projected/8d5ccc54-f637-4be2-bba0-b57c67c7c9c4-kube-api-access-4k77x\") on node \"crc\" DevicePath \"\"" Feb 17 19:14:15 crc kubenswrapper[4680]: I0217 19:14:15.377769 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d5ccc54-f637-4be2-bba0-b57c67c7c9c4-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 19:14:15 crc kubenswrapper[4680]: I0217 19:14:15.752285 4680 generic.go:334] "Generic (PLEG): container finished" podID="8d5ccc54-f637-4be2-bba0-b57c67c7c9c4" containerID="e18dfe01d87b3c656f08c5ee945d4b735c5738f2911715d74381ab9fcd32de80" exitCode=0 Feb 17 19:14:15 crc kubenswrapper[4680]: I0217 19:14:15.752343 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wpvk8" event={"ID":"8d5ccc54-f637-4be2-bba0-b57c67c7c9c4","Type":"ContainerDied","Data":"e18dfe01d87b3c656f08c5ee945d4b735c5738f2911715d74381ab9fcd32de80"} Feb 17 19:14:15 crc kubenswrapper[4680]: I0217 19:14:15.752305 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wpvk8" Feb 17 19:14:15 crc kubenswrapper[4680]: I0217 19:14:15.752381 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wpvk8" event={"ID":"8d5ccc54-f637-4be2-bba0-b57c67c7c9c4","Type":"ContainerDied","Data":"c8b9cef611902faf3c0d9a2a153da7fdd5427f1fbe059a065253f2395b3577c9"} Feb 17 19:14:15 crc kubenswrapper[4680]: I0217 19:14:15.752399 4680 scope.go:117] "RemoveContainer" containerID="e18dfe01d87b3c656f08c5ee945d4b735c5738f2911715d74381ab9fcd32de80" Feb 17 19:14:15 crc kubenswrapper[4680]: I0217 19:14:15.780437 4680 scope.go:117] "RemoveContainer" containerID="9f243e5d59d1f8f345c240c99580b98952dafa120ee1d2465a17a792b6fbce3d" Feb 17 19:14:15 crc kubenswrapper[4680]: I0217 19:14:15.797083 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wpvk8"] Feb 17 19:14:15 crc kubenswrapper[4680]: I0217 19:14:15.804986 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wpvk8"] Feb 17 19:14:15 crc kubenswrapper[4680]: I0217 19:14:15.822121 4680 scope.go:117] "RemoveContainer" containerID="d3893cc0a1b0243db3eda5851f263c7ed8225dcfa4d62a285ccc4e2b448502a4" Feb 17 19:14:15 crc kubenswrapper[4680]: I0217 19:14:15.870840 4680 scope.go:117] "RemoveContainer" containerID="e18dfe01d87b3c656f08c5ee945d4b735c5738f2911715d74381ab9fcd32de80" Feb 17 19:14:15 crc kubenswrapper[4680]: E0217 19:14:15.871367 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e18dfe01d87b3c656f08c5ee945d4b735c5738f2911715d74381ab9fcd32de80\": container with ID starting with e18dfe01d87b3c656f08c5ee945d4b735c5738f2911715d74381ab9fcd32de80 not found: ID does not exist" containerID="e18dfe01d87b3c656f08c5ee945d4b735c5738f2911715d74381ab9fcd32de80" Feb 17 19:14:15 crc kubenswrapper[4680]: I0217 19:14:15.871406 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e18dfe01d87b3c656f08c5ee945d4b735c5738f2911715d74381ab9fcd32de80"} err="failed to get container status \"e18dfe01d87b3c656f08c5ee945d4b735c5738f2911715d74381ab9fcd32de80\": rpc error: code = NotFound desc = could not find container \"e18dfe01d87b3c656f08c5ee945d4b735c5738f2911715d74381ab9fcd32de80\": container with ID starting with e18dfe01d87b3c656f08c5ee945d4b735c5738f2911715d74381ab9fcd32de80 not found: ID does not exist" Feb 17 19:14:15 crc kubenswrapper[4680]: I0217 19:14:15.871429 4680 scope.go:117] "RemoveContainer" containerID="9f243e5d59d1f8f345c240c99580b98952dafa120ee1d2465a17a792b6fbce3d" Feb 17 19:14:15 crc kubenswrapper[4680]: E0217 19:14:15.871782 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f243e5d59d1f8f345c240c99580b98952dafa120ee1d2465a17a792b6fbce3d\": container with ID starting with 9f243e5d59d1f8f345c240c99580b98952dafa120ee1d2465a17a792b6fbce3d not found: ID does not exist" containerID="9f243e5d59d1f8f345c240c99580b98952dafa120ee1d2465a17a792b6fbce3d" Feb 17 19:14:15 crc kubenswrapper[4680]: I0217 19:14:15.871808 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f243e5d59d1f8f345c240c99580b98952dafa120ee1d2465a17a792b6fbce3d"} err="failed to get container status \"9f243e5d59d1f8f345c240c99580b98952dafa120ee1d2465a17a792b6fbce3d\": rpc error: code = NotFound desc = could not find 
container \"9f243e5d59d1f8f345c240c99580b98952dafa120ee1d2465a17a792b6fbce3d\": container with ID starting with 9f243e5d59d1f8f345c240c99580b98952dafa120ee1d2465a17a792b6fbce3d not found: ID does not exist" Feb 17 19:14:15 crc kubenswrapper[4680]: I0217 19:14:15.871821 4680 scope.go:117] "RemoveContainer" containerID="d3893cc0a1b0243db3eda5851f263c7ed8225dcfa4d62a285ccc4e2b448502a4" Feb 17 19:14:15 crc kubenswrapper[4680]: E0217 19:14:15.872116 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3893cc0a1b0243db3eda5851f263c7ed8225dcfa4d62a285ccc4e2b448502a4\": container with ID starting with d3893cc0a1b0243db3eda5851f263c7ed8225dcfa4d62a285ccc4e2b448502a4 not found: ID does not exist" containerID="d3893cc0a1b0243db3eda5851f263c7ed8225dcfa4d62a285ccc4e2b448502a4" Feb 17 19:14:15 crc kubenswrapper[4680]: I0217 19:14:15.872135 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3893cc0a1b0243db3eda5851f263c7ed8225dcfa4d62a285ccc4e2b448502a4"} err="failed to get container status \"d3893cc0a1b0243db3eda5851f263c7ed8225dcfa4d62a285ccc4e2b448502a4\": rpc error: code = NotFound desc = could not find container \"d3893cc0a1b0243db3eda5851f263c7ed8225dcfa4d62a285ccc4e2b448502a4\": container with ID starting with d3893cc0a1b0243db3eda5851f263c7ed8225dcfa4d62a285ccc4e2b448502a4 not found: ID does not exist" Feb 17 19:14:17 crc kubenswrapper[4680]: I0217 19:14:17.116930 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d5ccc54-f637-4be2-bba0-b57c67c7c9c4" path="/var/lib/kubelet/pods/8d5ccc54-f637-4be2-bba0-b57c67c7c9c4/volumes" Feb 17 19:14:52 crc kubenswrapper[4680]: I0217 19:14:52.554063 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q7ljm"] Feb 17 19:14:52 crc kubenswrapper[4680]: E0217 19:14:52.555100 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d5ccc54-f637-4be2-bba0-b57c67c7c9c4" containerName="extract-content" Feb 17 19:14:52 crc kubenswrapper[4680]: I0217 19:14:52.555116 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d5ccc54-f637-4be2-bba0-b57c67c7c9c4" containerName="extract-content" Feb 17 19:14:52 crc kubenswrapper[4680]: E0217 19:14:52.555142 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d5ccc54-f637-4be2-bba0-b57c67c7c9c4" containerName="registry-server" Feb 17 19:14:52 crc kubenswrapper[4680]: I0217 19:14:52.555150 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d5ccc54-f637-4be2-bba0-b57c67c7c9c4" containerName="registry-server" Feb 17 19:14:52 crc kubenswrapper[4680]: E0217 19:14:52.555170 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d5ccc54-f637-4be2-bba0-b57c67c7c9c4" containerName="extract-utilities" Feb 17 19:14:52 crc kubenswrapper[4680]: I0217 19:14:52.555178 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d5ccc54-f637-4be2-bba0-b57c67c7c9c4" containerName="extract-utilities" Feb 17 19:14:52 crc kubenswrapper[4680]: I0217 19:14:52.555433 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d5ccc54-f637-4be2-bba0-b57c67c7c9c4" containerName="registry-server" Feb 17 19:14:52 crc kubenswrapper[4680]: I0217 19:14:52.557022 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q7ljm" Feb 17 19:14:52 crc kubenswrapper[4680]: I0217 19:14:52.569089 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q7ljm"] Feb 17 19:14:52 crc kubenswrapper[4680]: I0217 19:14:52.704254 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e43e74c-d885-4cfd-a8e0-55ed3f4f7918-catalog-content\") pod \"redhat-operators-q7ljm\" (UID: \"5e43e74c-d885-4cfd-a8e0-55ed3f4f7918\") " pod="openshift-marketplace/redhat-operators-q7ljm" Feb 17 19:14:52 crc kubenswrapper[4680]: I0217 19:14:52.704406 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e43e74c-d885-4cfd-a8e0-55ed3f4f7918-utilities\") pod \"redhat-operators-q7ljm\" (UID: \"5e43e74c-d885-4cfd-a8e0-55ed3f4f7918\") " pod="openshift-marketplace/redhat-operators-q7ljm" Feb 17 19:14:52 crc kubenswrapper[4680]: I0217 19:14:52.704434 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr7kv\" (UniqueName: \"kubernetes.io/projected/5e43e74c-d885-4cfd-a8e0-55ed3f4f7918-kube-api-access-wr7kv\") pod \"redhat-operators-q7ljm\" (UID: \"5e43e74c-d885-4cfd-a8e0-55ed3f4f7918\") " pod="openshift-marketplace/redhat-operators-q7ljm" Feb 17 19:14:52 crc kubenswrapper[4680]: I0217 19:14:52.806494 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e43e74c-d885-4cfd-a8e0-55ed3f4f7918-utilities\") pod \"redhat-operators-q7ljm\" (UID: \"5e43e74c-d885-4cfd-a8e0-55ed3f4f7918\") " pod="openshift-marketplace/redhat-operators-q7ljm" Feb 17 19:14:52 crc kubenswrapper[4680]: I0217 19:14:52.806847 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr7kv\" (UniqueName: \"kubernetes.io/projected/5e43e74c-d885-4cfd-a8e0-55ed3f4f7918-kube-api-access-wr7kv\") pod \"redhat-operators-q7ljm\" (UID: \"5e43e74c-d885-4cfd-a8e0-55ed3f4f7918\") " pod="openshift-marketplace/redhat-operators-q7ljm" Feb 17 19:14:52 crc kubenswrapper[4680]: I0217 19:14:52.807056 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e43e74c-d885-4cfd-a8e0-55ed3f4f7918-utilities\") pod \"redhat-operators-q7ljm\" (UID: \"5e43e74c-d885-4cfd-a8e0-55ed3f4f7918\") " pod="openshift-marketplace/redhat-operators-q7ljm" Feb 17 19:14:52 crc kubenswrapper[4680]: I0217 19:14:52.807066 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e43e74c-d885-4cfd-a8e0-55ed3f4f7918-catalog-content\") pod \"redhat-operators-q7ljm\" (UID: \"5e43e74c-d885-4cfd-a8e0-55ed3f4f7918\") " pod="openshift-marketplace/redhat-operators-q7ljm" Feb 17 19:14:52 crc kubenswrapper[4680]: I0217 19:14:52.807458 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e43e74c-d885-4cfd-a8e0-55ed3f4f7918-catalog-content\") pod \"redhat-operators-q7ljm\" (UID: \"5e43e74c-d885-4cfd-a8e0-55ed3f4f7918\") " pod="openshift-marketplace/redhat-operators-q7ljm" Feb 17 19:14:52 crc kubenswrapper[4680]: I0217 19:14:52.837335 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wr7kv\" (UniqueName: \"kubernetes.io/projected/5e43e74c-d885-4cfd-a8e0-55ed3f4f7918-kube-api-access-wr7kv\") pod \"redhat-operators-q7ljm\" (UID: \"5e43e74c-d885-4cfd-a8e0-55ed3f4f7918\") " pod="openshift-marketplace/redhat-operators-q7ljm" Feb 17 19:14:52 crc kubenswrapper[4680]: I0217 19:14:52.882382 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q7ljm" Feb 17 19:14:53 crc kubenswrapper[4680]: I0217 19:14:53.194807 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q7ljm"] Feb 17 19:14:54 crc kubenswrapper[4680]: I0217 19:14:54.068524 4680 generic.go:334] "Generic (PLEG): container finished" podID="5e43e74c-d885-4cfd-a8e0-55ed3f4f7918" containerID="412d97438ab8040f3327a2ee750020fde81f3ffd00737de07a270438f4c51a7e" exitCode=0 Feb 17 19:14:54 crc kubenswrapper[4680]: I0217 19:14:54.068583 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7ljm" event={"ID":"5e43e74c-d885-4cfd-a8e0-55ed3f4f7918","Type":"ContainerDied","Data":"412d97438ab8040f3327a2ee750020fde81f3ffd00737de07a270438f4c51a7e"} Feb 17 19:14:54 crc kubenswrapper[4680]: I0217 19:14:54.068770 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7ljm" event={"ID":"5e43e74c-d885-4cfd-a8e0-55ed3f4f7918","Type":"ContainerStarted","Data":"5e2bee0ef57e450570c701d3035f6c9ca5fe5fb50c0c7f4bf29b1334b0604b44"} Feb 17 19:14:55 crc kubenswrapper[4680]: I0217 19:14:55.084365 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7ljm" event={"ID":"5e43e74c-d885-4cfd-a8e0-55ed3f4f7918","Type":"ContainerStarted","Data":"384b112e5e461647c721a17df209efbb832a966d5a8822dfef1c6eb8b77b90a5"} Feb 17 19:15:00 crc kubenswrapper[4680]: I0217 19:15:00.138628 4680 generic.go:334] "Generic (PLEG): container finished" podID="5e43e74c-d885-4cfd-a8e0-55ed3f4f7918" containerID="384b112e5e461647c721a17df209efbb832a966d5a8822dfef1c6eb8b77b90a5" exitCode=0 Feb 17 19:15:00 crc kubenswrapper[4680]: I0217 19:15:00.138758 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7ljm" event={"ID":"5e43e74c-d885-4cfd-a8e0-55ed3f4f7918","Type":"ContainerDied","Data":"384b112e5e461647c721a17df209efbb832a966d5a8822dfef1c6eb8b77b90a5"} Feb 17 19:15:00 crc kubenswrapper[4680]: I0217 19:15:00.146343 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522595-s4dqn"] Feb 17 19:15:00 crc kubenswrapper[4680]: I0217 19:15:00.148026 4680 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522595-s4dqn" Feb 17 19:15:00 crc kubenswrapper[4680]: I0217 19:15:00.151263 4680 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 19:15:00 crc kubenswrapper[4680]: I0217 19:15:00.168782 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522595-s4dqn"] Feb 17 19:15:00 crc kubenswrapper[4680]: I0217 19:15:00.172971 4680 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 19:15:00 crc kubenswrapper[4680]: I0217 19:15:00.258384 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bprhp\" (UniqueName: \"kubernetes.io/projected/1d328f50-8892-45d4-8ba4-e4227f408d80-kube-api-access-bprhp\") pod \"collect-profiles-29522595-s4dqn\" (UID: \"1d328f50-8892-45d4-8ba4-e4227f408d80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522595-s4dqn" Feb 17 19:15:00 crc kubenswrapper[4680]: I0217 19:15:00.258490 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d328f50-8892-45d4-8ba4-e4227f408d80-secret-volume\") pod \"collect-profiles-29522595-s4dqn\" (UID: \"1d328f50-8892-45d4-8ba4-e4227f408d80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522595-s4dqn" Feb 17 19:15:00 crc kubenswrapper[4680]: I0217 19:15:00.258538 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d328f50-8892-45d4-8ba4-e4227f408d80-config-volume\") pod \"collect-profiles-29522595-s4dqn\" (UID: \"1d328f50-8892-45d4-8ba4-e4227f408d80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522595-s4dqn" Feb 17 19:15:00 crc kubenswrapper[4680]: I0217 19:15:00.360864 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d328f50-8892-45d4-8ba4-e4227f408d80-secret-volume\") pod \"collect-profiles-29522595-s4dqn\" (UID: \"1d328f50-8892-45d4-8ba4-e4227f408d80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522595-s4dqn" Feb 17 19:15:00 crc kubenswrapper[4680]: I0217 19:15:00.360942 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d328f50-8892-45d4-8ba4-e4227f408d80-config-volume\") pod \"collect-profiles-29522595-s4dqn\" (UID: \"1d328f50-8892-45d4-8ba4-e4227f408d80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522595-s4dqn" Feb 17 19:15:00 crc kubenswrapper[4680]: I0217 19:15:00.361034 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bprhp\" (UniqueName: \"kubernetes.io/projected/1d328f50-8892-45d4-8ba4-e4227f408d80-kube-api-access-bprhp\") pod \"collect-profiles-29522595-s4dqn\" (UID: \"1d328f50-8892-45d4-8ba4-e4227f408d80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522595-s4dqn" Feb 17 19:15:00 crc kubenswrapper[4680]: I0217 19:15:00.361908 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d328f50-8892-45d4-8ba4-e4227f408d80-config-volume\") pod 
\"collect-profiles-29522595-s4dqn\" (UID: \"1d328f50-8892-45d4-8ba4-e4227f408d80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522595-s4dqn" Feb 17 19:15:00 crc kubenswrapper[4680]: I0217 19:15:00.367533 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d328f50-8892-45d4-8ba4-e4227f408d80-secret-volume\") pod \"collect-profiles-29522595-s4dqn\" (UID: \"1d328f50-8892-45d4-8ba4-e4227f408d80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522595-s4dqn" Feb 17 19:15:00 crc kubenswrapper[4680]: I0217 19:15:00.376066 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bprhp\" (UniqueName: \"kubernetes.io/projected/1d328f50-8892-45d4-8ba4-e4227f408d80-kube-api-access-bprhp\") pod \"collect-profiles-29522595-s4dqn\" (UID: \"1d328f50-8892-45d4-8ba4-e4227f408d80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522595-s4dqn" Feb 17 19:15:00 crc kubenswrapper[4680]: I0217 19:15:00.488160 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522595-s4dqn" Feb 17 19:15:00 crc kubenswrapper[4680]: I0217 19:15:00.968629 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522595-s4dqn"] Feb 17 19:15:00 crc kubenswrapper[4680]: W0217 19:15:00.972742 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d328f50_8892_45d4_8ba4_e4227f408d80.slice/crio-01f90111593f0b27657086a69062246234b3d3a2703d6087599a9d0a5184974a WatchSource:0}: Error finding container 01f90111593f0b27657086a69062246234b3d3a2703d6087599a9d0a5184974a: Status 404 returned error can't find the container with id 01f90111593f0b27657086a69062246234b3d3a2703d6087599a9d0a5184974a Feb 17 19:15:01 crc kubenswrapper[4680]: I0217 19:15:01.148161 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7ljm" event={"ID":"5e43e74c-d885-4cfd-a8e0-55ed3f4f7918","Type":"ContainerStarted","Data":"6f1557af98667fe707f325e6814fa4e4851ece9f67122dbd51ca12d2780fcbb7"} Feb 17 19:15:01 crc kubenswrapper[4680]: I0217 19:15:01.149305 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522595-s4dqn" event={"ID":"1d328f50-8892-45d4-8ba4-e4227f408d80","Type":"ContainerStarted","Data":"4749451825f8cf7bd2d41de11083a42b049e0e86bcf5f688386f162996d62e04"} Feb 17 19:15:01 crc kubenswrapper[4680]: I0217 19:15:01.149375 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522595-s4dqn" event={"ID":"1d328f50-8892-45d4-8ba4-e4227f408d80","Type":"ContainerStarted","Data":"01f90111593f0b27657086a69062246234b3d3a2703d6087599a9d0a5184974a"} Feb 17 19:15:01 crc kubenswrapper[4680]: I0217 19:15:01.173915 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q7ljm" podStartSLOduration=2.72213268 podStartE2EDuration="9.173898217s" podCreationTimestamp="2026-02-17 19:14:52 +0000 UTC" firstStartedPulling="2026-02-17 19:14:54.071156616 +0000 UTC m=+5423.622055295" lastFinishedPulling="2026-02-17 19:15:00.522922153 +0000 UTC m=+5430.073820832" observedRunningTime="2026-02-17 19:15:01.169827239 +0000 UTC m=+5430.720725918" watchObservedRunningTime="2026-02-17 
19:15:01.173898217 +0000 UTC m=+5430.724796896" Feb 17 19:15:01 crc kubenswrapper[4680]: I0217 19:15:01.192272 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522595-s4dqn" podStartSLOduration=1.19225701 podStartE2EDuration="1.19225701s" podCreationTimestamp="2026-02-17 19:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 19:15:01.186719427 +0000 UTC m=+5430.737618106" watchObservedRunningTime="2026-02-17 19:15:01.19225701 +0000 UTC m=+5430.743155689" Feb 17 19:15:02 crc kubenswrapper[4680]: I0217 19:15:02.158982 4680 generic.go:334] "Generic (PLEG): container finished" podID="1d328f50-8892-45d4-8ba4-e4227f408d80" containerID="4749451825f8cf7bd2d41de11083a42b049e0e86bcf5f688386f162996d62e04" exitCode=0 Feb 17 19:15:02 crc kubenswrapper[4680]: I0217 19:15:02.159080 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522595-s4dqn" event={"ID":"1d328f50-8892-45d4-8ba4-e4227f408d80","Type":"ContainerDied","Data":"4749451825f8cf7bd2d41de11083a42b049e0e86bcf5f688386f162996d62e04"} Feb 17 19:15:02 crc kubenswrapper[4680]: I0217 19:15:02.883807 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q7ljm" Feb 17 19:15:02 crc kubenswrapper[4680]: I0217 19:15:02.884070 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q7ljm" Feb 17 19:15:03 crc kubenswrapper[4680]: I0217 19:15:03.574550 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522595-s4dqn" Feb 17 19:15:03 crc kubenswrapper[4680]: I0217 19:15:03.755396 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d328f50-8892-45d4-8ba4-e4227f408d80-secret-volume\") pod \"1d328f50-8892-45d4-8ba4-e4227f408d80\" (UID: \"1d328f50-8892-45d4-8ba4-e4227f408d80\") " Feb 17 19:15:03 crc kubenswrapper[4680]: I0217 19:15:03.755451 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d328f50-8892-45d4-8ba4-e4227f408d80-config-volume\") pod \"1d328f50-8892-45d4-8ba4-e4227f408d80\" (UID: \"1d328f50-8892-45d4-8ba4-e4227f408d80\") " Feb 17 19:15:03 crc kubenswrapper[4680]: I0217 19:15:03.755621 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bprhp\" (UniqueName: \"kubernetes.io/projected/1d328f50-8892-45d4-8ba4-e4227f408d80-kube-api-access-bprhp\") pod \"1d328f50-8892-45d4-8ba4-e4227f408d80\" (UID: \"1d328f50-8892-45d4-8ba4-e4227f408d80\") " Feb 17 19:15:03 crc kubenswrapper[4680]: I0217 19:15:03.757136 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d328f50-8892-45d4-8ba4-e4227f408d80-config-volume" (OuterVolumeSpecName: "config-volume") pod "1d328f50-8892-45d4-8ba4-e4227f408d80" (UID: "1d328f50-8892-45d4-8ba4-e4227f408d80"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 19:15:03 crc kubenswrapper[4680]: I0217 19:15:03.766723 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d328f50-8892-45d4-8ba4-e4227f408d80-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1d328f50-8892-45d4-8ba4-e4227f408d80" (UID: "1d328f50-8892-45d4-8ba4-e4227f408d80"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 19:15:03 crc kubenswrapper[4680]: I0217 19:15:03.774557 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d328f50-8892-45d4-8ba4-e4227f408d80-kube-api-access-bprhp" (OuterVolumeSpecName: "kube-api-access-bprhp") pod "1d328f50-8892-45d4-8ba4-e4227f408d80" (UID: "1d328f50-8892-45d4-8ba4-e4227f408d80"). InnerVolumeSpecName "kube-api-access-bprhp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 19:15:03 crc kubenswrapper[4680]: I0217 19:15:03.861614 4680 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d328f50-8892-45d4-8ba4-e4227f408d80-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 19:15:03 crc kubenswrapper[4680]: I0217 19:15:03.861647 4680 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d328f50-8892-45d4-8ba4-e4227f408d80-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 19:15:03 crc kubenswrapper[4680]: I0217 19:15:03.861657 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bprhp\" (UniqueName: \"kubernetes.io/projected/1d328f50-8892-45d4-8ba4-e4227f408d80-kube-api-access-bprhp\") on node \"crc\" DevicePath \"\"" Feb 17 19:15:03 crc kubenswrapper[4680]: I0217 19:15:03.924406 4680 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q7ljm" podUID="5e43e74c-d885-4cfd-a8e0-55ed3f4f7918" containerName="registry-server" probeResult="failure" output=< Feb 17 19:15:03 crc kubenswrapper[4680]: timeout: failed to connect service ":50051" within 1s Feb 17 19:15:03 crc kubenswrapper[4680]: > Feb 17 19:15:04 crc kubenswrapper[4680]: I0217 19:15:04.178850 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522595-s4dqn" event={"ID":"1d328f50-8892-45d4-8ba4-e4227f408d80","Type":"ContainerDied","Data":"01f90111593f0b27657086a69062246234b3d3a2703d6087599a9d0a5184974a"} Feb 17 19:15:04 crc kubenswrapper[4680]: I0217 19:15:04.178889 4680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01f90111593f0b27657086a69062246234b3d3a2703d6087599a9d0a5184974a" Feb 17 19:15:04 crc kubenswrapper[4680]: I0217 19:15:04.178892 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522595-s4dqn" Feb 17 19:15:04 crc kubenswrapper[4680]: I0217 19:15:04.645476 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522550-xxwbr"] Feb 17 19:15:04 crc kubenswrapper[4680]: I0217 19:15:04.653465 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522550-xxwbr"] Feb 17 19:15:05 crc kubenswrapper[4680]: I0217 19:15:05.116764 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b06e1201-c49f-4cd4-a794-5beecc1223af" path="/var/lib/kubelet/pods/b06e1201-c49f-4cd4-a794-5beecc1223af/volumes" Feb 17 19:15:12 crc kubenswrapper[4680]: I0217 19:15:12.936736 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q7ljm" Feb 17 19:15:12 crc kubenswrapper[4680]: I0217 19:15:12.984047 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q7ljm" Feb 17 19:15:13 crc kubenswrapper[4680]: I0217 19:15:13.177265 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q7ljm"] Feb 17 19:15:14 crc kubenswrapper[4680]: I0217 19:15:14.254907 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q7ljm" podUID="5e43e74c-d885-4cfd-a8e0-55ed3f4f7918" containerName="registry-server" containerID="cri-o://6f1557af98667fe707f325e6814fa4e4851ece9f67122dbd51ca12d2780fcbb7" gracePeriod=2 Feb 17 19:15:14 crc kubenswrapper[4680]: I0217 19:15:14.671024 4680 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q7ljm" Feb 17 19:15:14 crc kubenswrapper[4680]: I0217 19:15:14.766074 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wr7kv\" (UniqueName: \"kubernetes.io/projected/5e43e74c-d885-4cfd-a8e0-55ed3f4f7918-kube-api-access-wr7kv\") pod \"5e43e74c-d885-4cfd-a8e0-55ed3f4f7918\" (UID: \"5e43e74c-d885-4cfd-a8e0-55ed3f4f7918\") " Feb 17 19:15:14 crc kubenswrapper[4680]: I0217 19:15:14.766217 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e43e74c-d885-4cfd-a8e0-55ed3f4f7918-utilities\") pod \"5e43e74c-d885-4cfd-a8e0-55ed3f4f7918\" (UID: \"5e43e74c-d885-4cfd-a8e0-55ed3f4f7918\") " Feb 17 19:15:14 crc kubenswrapper[4680]: I0217 19:15:14.766287 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e43e74c-d885-4cfd-a8e0-55ed3f4f7918-catalog-content\") pod \"5e43e74c-d885-4cfd-a8e0-55ed3f4f7918\" (UID: \"5e43e74c-d885-4cfd-a8e0-55ed3f4f7918\") " Feb 17 19:15:14 crc kubenswrapper[4680]: I0217 19:15:14.766926 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e43e74c-d885-4cfd-a8e0-55ed3f4f7918-utilities" (OuterVolumeSpecName: "utilities") pod "5e43e74c-d885-4cfd-a8e0-55ed3f4f7918" (UID: "5e43e74c-d885-4cfd-a8e0-55ed3f4f7918"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 19:15:14 crc kubenswrapper[4680]: I0217 19:15:14.771999 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e43e74c-d885-4cfd-a8e0-55ed3f4f7918-kube-api-access-wr7kv" (OuterVolumeSpecName: "kube-api-access-wr7kv") pod "5e43e74c-d885-4cfd-a8e0-55ed3f4f7918" (UID: "5e43e74c-d885-4cfd-a8e0-55ed3f4f7918"). InnerVolumeSpecName "kube-api-access-wr7kv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 19:15:14 crc kubenswrapper[4680]: I0217 19:15:14.868392 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wr7kv\" (UniqueName: \"kubernetes.io/projected/5e43e74c-d885-4cfd-a8e0-55ed3f4f7918-kube-api-access-wr7kv\") on node \"crc\" DevicePath \"\"" Feb 17 19:15:14 crc kubenswrapper[4680]: I0217 19:15:14.868434 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e43e74c-d885-4cfd-a8e0-55ed3f4f7918-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 19:15:14 crc kubenswrapper[4680]: I0217 19:15:14.909247 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e43e74c-d885-4cfd-a8e0-55ed3f4f7918-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e43e74c-d885-4cfd-a8e0-55ed3f4f7918" (UID: "5e43e74c-d885-4cfd-a8e0-55ed3f4f7918"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 19:15:14 crc kubenswrapper[4680]: I0217 19:15:14.970409 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e43e74c-d885-4cfd-a8e0-55ed3f4f7918-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 19:15:15 crc kubenswrapper[4680]: I0217 19:15:15.267031 4680 generic.go:334] "Generic (PLEG): container finished" podID="5e43e74c-d885-4cfd-a8e0-55ed3f4f7918" containerID="6f1557af98667fe707f325e6814fa4e4851ece9f67122dbd51ca12d2780fcbb7" exitCode=0 Feb 17 19:15:15 crc kubenswrapper[4680]: I0217 19:15:15.267072 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7ljm" event={"ID":"5e43e74c-d885-4cfd-a8e0-55ed3f4f7918","Type":"ContainerDied","Data":"6f1557af98667fe707f325e6814fa4e4851ece9f67122dbd51ca12d2780fcbb7"} Feb 17 19:15:15 crc kubenswrapper[4680]: I0217 19:15:15.267101 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7ljm" event={"ID":"5e43e74c-d885-4cfd-a8e0-55ed3f4f7918","Type":"ContainerDied","Data":"5e2bee0ef57e450570c701d3035f6c9ca5fe5fb50c0c7f4bf29b1334b0604b44"} Feb 17 19:15:15 crc kubenswrapper[4680]: I0217 19:15:15.267117 4680 scope.go:117] "RemoveContainer" containerID="6f1557af98667fe707f325e6814fa4e4851ece9f67122dbd51ca12d2780fcbb7" Feb 17 19:15:15 crc kubenswrapper[4680]: I0217 19:15:15.268306 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q7ljm" Feb 17 19:15:15 crc kubenswrapper[4680]: I0217 19:15:15.298156 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q7ljm"] Feb 17 19:15:15 crc kubenswrapper[4680]: I0217 19:15:15.305280 4680 scope.go:117] "RemoveContainer" containerID="384b112e5e461647c721a17df209efbb832a966d5a8822dfef1c6eb8b77b90a5" Feb 17 19:15:15 crc kubenswrapper[4680]: I0217 19:15:15.311748 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q7ljm"] Feb 17 19:15:15 crc kubenswrapper[4680]: I0217 19:15:15.332118 4680 scope.go:117] "RemoveContainer" containerID="412d97438ab8040f3327a2ee750020fde81f3ffd00737de07a270438f4c51a7e" Feb 17 19:15:15 crc kubenswrapper[4680]: I0217 19:15:15.370030 4680 scope.go:117] "RemoveContainer" containerID="6f1557af98667fe707f325e6814fa4e4851ece9f67122dbd51ca12d2780fcbb7" Feb 17 19:15:15 crc kubenswrapper[4680]: E0217 19:15:15.370504 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f1557af98667fe707f325e6814fa4e4851ece9f67122dbd51ca12d2780fcbb7\": container with ID starting with 6f1557af98667fe707f325e6814fa4e4851ece9f67122dbd51ca12d2780fcbb7 not found: ID does not exist" containerID="6f1557af98667fe707f325e6814fa4e4851ece9f67122dbd51ca12d2780fcbb7" Feb 17 19:15:15 crc kubenswrapper[4680]: I0217 19:15:15.370546 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f1557af98667fe707f325e6814fa4e4851ece9f67122dbd51ca12d2780fcbb7"} err="failed to get container status \"6f1557af98667fe707f325e6814fa4e4851ece9f67122dbd51ca12d2780fcbb7\": rpc error: code = NotFound desc = could not find container \"6f1557af98667fe707f325e6814fa4e4851ece9f67122dbd51ca12d2780fcbb7\": container with ID starting with 6f1557af98667fe707f325e6814fa4e4851ece9f67122dbd51ca12d2780fcbb7 not found: ID does not exist" Feb 17 19:15:15 crc kubenswrapper[4680]: I0217 19:15:15.370566 4680 scope.go:117] "RemoveContainer" containerID="384b112e5e461647c721a17df209efbb832a966d5a8822dfef1c6eb8b77b90a5" Feb 17 19:15:15 crc kubenswrapper[4680]: E0217 19:15:15.370822 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"384b112e5e461647c721a17df209efbb832a966d5a8822dfef1c6eb8b77b90a5\": container with ID starting with 384b112e5e461647c721a17df209efbb832a966d5a8822dfef1c6eb8b77b90a5 not found: ID does not exist" containerID="384b112e5e461647c721a17df209efbb832a966d5a8822dfef1c6eb8b77b90a5" Feb 17 19:15:15 crc kubenswrapper[4680]: I0217 19:15:15.370841 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"384b112e5e461647c721a17df209efbb832a966d5a8822dfef1c6eb8b77b90a5"} err="failed to get container status \"384b112e5e461647c721a17df209efbb832a966d5a8822dfef1c6eb8b77b90a5\": rpc error: code = NotFound desc = could not find container \"384b112e5e461647c721a17df209efbb832a966d5a8822dfef1c6eb8b77b90a5\": container with ID starting with 384b112e5e461647c721a17df209efbb832a966d5a8822dfef1c6eb8b77b90a5 not found: ID does not exist" Feb 17 19:15:15 crc kubenswrapper[4680]: I0217 19:15:15.370852 4680 scope.go:117] "RemoveContainer" containerID="412d97438ab8040f3327a2ee750020fde81f3ffd00737de07a270438f4c51a7e" Feb 17 19:15:15 crc kubenswrapper[4680]: E0217 19:15:15.371095 4680 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"412d97438ab8040f3327a2ee750020fde81f3ffd00737de07a270438f4c51a7e\": container with ID starting with 412d97438ab8040f3327a2ee750020fde81f3ffd00737de07a270438f4c51a7e not found: ID does not exist" containerID="412d97438ab8040f3327a2ee750020fde81f3ffd00737de07a270438f4c51a7e" Feb 17 19:15:15 crc kubenswrapper[4680]: I0217 19:15:15.371121 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"412d97438ab8040f3327a2ee750020fde81f3ffd00737de07a270438f4c51a7e"} err="failed to get container status \"412d97438ab8040f3327a2ee750020fde81f3ffd00737de07a270438f4c51a7e\": rpc error: code = NotFound desc = could not find container \"412d97438ab8040f3327a2ee750020fde81f3ffd00737de07a270438f4c51a7e\": container with ID starting with 412d97438ab8040f3327a2ee750020fde81f3ffd00737de07a270438f4c51a7e not found: ID does not exist" Feb 17 19:15:17 crc kubenswrapper[4680]: I0217 19:15:17.127273 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e43e74c-d885-4cfd-a8e0-55ed3f4f7918" path="/var/lib/kubelet/pods/5e43e74c-d885-4cfd-a8e0-55ed3f4f7918/volumes" Feb 17 19:15:35 crc kubenswrapper[4680]: I0217 19:15:35.588480 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 19:15:35 crc kubenswrapper[4680]: I0217 19:15:35.589042 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 19:15:47 crc kubenswrapper[4680]: I0217 19:15:47.305897 4680 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5rmjq"] Feb 17 19:15:47 crc kubenswrapper[4680]: E0217 19:15:47.307144 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e43e74c-d885-4cfd-a8e0-55ed3f4f7918" containerName="registry-server" Feb 17 19:15:47 crc kubenswrapper[4680]: I0217 19:15:47.307163 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e43e74c-d885-4cfd-a8e0-55ed3f4f7918" containerName="registry-server" Feb 17 19:15:47 crc kubenswrapper[4680]: E0217 19:15:47.307184 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e43e74c-d885-4cfd-a8e0-55ed3f4f7918" containerName="extract-content" Feb 17 19:15:47 crc kubenswrapper[4680]: I0217 19:15:47.307192 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e43e74c-d885-4cfd-a8e0-55ed3f4f7918" containerName="extract-content" Feb 17 19:15:47 crc kubenswrapper[4680]: E0217 19:15:47.307212 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e43e74c-d885-4cfd-a8e0-55ed3f4f7918" containerName="extract-utilities" Feb 17 19:15:47 crc kubenswrapper[4680]: I0217 19:15:47.307220 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e43e74c-d885-4cfd-a8e0-55ed3f4f7918" containerName="extract-utilities" Feb 17 19:15:47 crc kubenswrapper[4680]: E0217 19:15:47.307251 4680 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d328f50-8892-45d4-8ba4-e4227f408d80" containerName="collect-profiles" Feb 17 19:15:47 crc kubenswrapper[4680]: I0217 
19:15:47.307260 4680 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d328f50-8892-45d4-8ba4-e4227f408d80" containerName="collect-profiles" Feb 17 19:15:47 crc kubenswrapper[4680]: I0217 19:15:47.307542 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d328f50-8892-45d4-8ba4-e4227f408d80" containerName="collect-profiles" Feb 17 19:15:47 crc kubenswrapper[4680]: I0217 19:15:47.307576 4680 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e43e74c-d885-4cfd-a8e0-55ed3f4f7918" containerName="registry-server" Feb 17 19:15:47 crc kubenswrapper[4680]: I0217 19:15:47.309384 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5rmjq" Feb 17 19:15:47 crc kubenswrapper[4680]: I0217 19:15:47.316546 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5rmjq"] Feb 17 19:15:47 crc kubenswrapper[4680]: I0217 19:15:47.390763 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nnhv\" (UniqueName: \"kubernetes.io/projected/7f818128-19b5-40c1-8b7e-7d9499581891-kube-api-access-5nnhv\") pod \"community-operators-5rmjq\" (UID: \"7f818128-19b5-40c1-8b7e-7d9499581891\") " pod="openshift-marketplace/community-operators-5rmjq" Feb 17 19:15:47 crc kubenswrapper[4680]: I0217 19:15:47.390915 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f818128-19b5-40c1-8b7e-7d9499581891-utilities\") pod \"community-operators-5rmjq\" (UID: \"7f818128-19b5-40c1-8b7e-7d9499581891\") " pod="openshift-marketplace/community-operators-5rmjq" Feb 17 19:15:47 crc kubenswrapper[4680]: I0217 19:15:47.391082 4680 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f818128-19b5-40c1-8b7e-7d9499581891-catalog-content\") pod \"community-operators-5rmjq\" (UID: \"7f818128-19b5-40c1-8b7e-7d9499581891\") " pod="openshift-marketplace/community-operators-5rmjq" Feb 17 19:15:47 crc kubenswrapper[4680]: I0217 19:15:47.492657 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nnhv\" (UniqueName: \"kubernetes.io/projected/7f818128-19b5-40c1-8b7e-7d9499581891-kube-api-access-5nnhv\") pod \"community-operators-5rmjq\" (UID: \"7f818128-19b5-40c1-8b7e-7d9499581891\") " pod="openshift-marketplace/community-operators-5rmjq" Feb 17 19:15:47 crc kubenswrapper[4680]: I0217 19:15:47.493004 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f818128-19b5-40c1-8b7e-7d9499581891-utilities\") pod \"community-operators-5rmjq\" (UID: \"7f818128-19b5-40c1-8b7e-7d9499581891\") " pod="openshift-marketplace/community-operators-5rmjq" Feb 17 19:15:47 crc kubenswrapper[4680]: I0217 19:15:47.493093 4680 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f818128-19b5-40c1-8b7e-7d9499581891-catalog-content\") pod \"community-operators-5rmjq\" (UID: \"7f818128-19b5-40c1-8b7e-7d9499581891\") " pod="openshift-marketplace/community-operators-5rmjq" Feb 17 19:15:47 crc kubenswrapper[4680]: I0217 19:15:47.493580 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7f818128-19b5-40c1-8b7e-7d9499581891-utilities\") pod \"community-operators-5rmjq\" (UID: \"7f818128-19b5-40c1-8b7e-7d9499581891\") " pod="openshift-marketplace/community-operators-5rmjq" Feb 17 19:15:47 crc kubenswrapper[4680]: I0217 19:15:47.493633 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f818128-19b5-40c1-8b7e-7d9499581891-catalog-content\") pod \"community-operators-5rmjq\" (UID: \"7f818128-19b5-40c1-8b7e-7d9499581891\") " pod="openshift-marketplace/community-operators-5rmjq" Feb 17 19:15:47 crc kubenswrapper[4680]: I0217 19:15:47.510872 4680 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nnhv\" (UniqueName: \"kubernetes.io/projected/7f818128-19b5-40c1-8b7e-7d9499581891-kube-api-access-5nnhv\") pod \"community-operators-5rmjq\" (UID: \"7f818128-19b5-40c1-8b7e-7d9499581891\") " pod="openshift-marketplace/community-operators-5rmjq" Feb 17 19:15:47 crc kubenswrapper[4680]: I0217 19:15:47.652147 4680 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5rmjq" Feb 17 19:15:48 crc kubenswrapper[4680]: I0217 19:15:48.237612 4680 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5rmjq"] Feb 17 19:15:48 crc kubenswrapper[4680]: W0217 19:15:48.254226 4680 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f818128_19b5_40c1_8b7e_7d9499581891.slice/crio-5213636334f679750037b6751e725e959e262bc281dc28480547b1c967a374b2 WatchSource:0}: Error finding container 5213636334f679750037b6751e725e959e262bc281dc28480547b1c967a374b2: Status 404 returned error can't find the container with id 5213636334f679750037b6751e725e959e262bc281dc28480547b1c967a374b2 Feb 17 19:15:48 crc kubenswrapper[4680]: I0217 19:15:48.659240 4680 generic.go:334] "Generic (PLEG): container finished" podID="7f818128-19b5-40c1-8b7e-7d9499581891" containerID="f92022a452921fa258fcde4efc75dafbb6ad4abe36ecb8cd71eabace5007681a" exitCode=0 Feb 17 19:15:48 crc kubenswrapper[4680]: I0217 19:15:48.659278 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rmjq" event={"ID":"7f818128-19b5-40c1-8b7e-7d9499581891","Type":"ContainerDied","Data":"f92022a452921fa258fcde4efc75dafbb6ad4abe36ecb8cd71eabace5007681a"} Feb 17 19:15:48 crc kubenswrapper[4680]: I0217 19:15:48.659300 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rmjq" event={"ID":"7f818128-19b5-40c1-8b7e-7d9499581891","Type":"ContainerStarted","Data":"5213636334f679750037b6751e725e959e262bc281dc28480547b1c967a374b2"} Feb 17 19:15:49 crc kubenswrapper[4680]: I0217 19:15:49.668486 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rmjq" event={"ID":"7f818128-19b5-40c1-8b7e-7d9499581891","Type":"ContainerStarted","Data":"1e7a1900beccb824cff1500bc9d4124a073a438af8138b06a4fe53de5d2bf150"} Feb 17 19:15:51 crc kubenswrapper[4680]: I0217 19:15:51.694692 4680 generic.go:334] "Generic (PLEG): container finished" podID="7f818128-19b5-40c1-8b7e-7d9499581891" containerID="1e7a1900beccb824cff1500bc9d4124a073a438af8138b06a4fe53de5d2bf150" exitCode=0 Feb 17 19:15:51 crc kubenswrapper[4680]: I0217 19:15:51.694781 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-5rmjq" event={"ID":"7f818128-19b5-40c1-8b7e-7d9499581891","Type":"ContainerDied","Data":"1e7a1900beccb824cff1500bc9d4124a073a438af8138b06a4fe53de5d2bf150"} Feb 17 19:15:52 crc kubenswrapper[4680]: I0217 19:15:52.705170 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rmjq" event={"ID":"7f818128-19b5-40c1-8b7e-7d9499581891","Type":"ContainerStarted","Data":"c81892b73bbad0ea77ca3edcc460eb0f96f40f5768918aa4bc54d20fa04749fc"} Feb 17 19:15:52 crc kubenswrapper[4680]: I0217 19:15:52.731241 4680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5rmjq" podStartSLOduration=2.223650339 podStartE2EDuration="5.731219436s" podCreationTimestamp="2026-02-17 19:15:47 +0000 UTC" firstStartedPulling="2026-02-17 19:15:48.663460081 +0000 UTC m=+5478.214358760" lastFinishedPulling="2026-02-17 19:15:52.171029178 +0000 UTC m=+5481.721927857" observedRunningTime="2026-02-17 19:15:52.722377669 +0000 UTC m=+5482.273276358" watchObservedRunningTime="2026-02-17 19:15:52.731219436 +0000 UTC m=+5482.282118125" Feb 17 19:15:57 crc kubenswrapper[4680]: I0217 19:15:57.652644 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5rmjq" Feb 17 19:15:57 crc kubenswrapper[4680]: I0217 19:15:57.653014 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5rmjq" Feb 17 19:15:57 crc kubenswrapper[4680]: I0217 19:15:57.653429 4680 scope.go:117] "RemoveContainer" containerID="06e4219645b0df2bb0b27095ac29410191b0c9c3fa31302c120c680a50b31a13" Feb 17 19:15:57 crc kubenswrapper[4680]: I0217 19:15:57.746625 4680 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5rmjq" Feb 17 19:15:57 crc kubenswrapper[4680]: I0217 19:15:57.818021 4680 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5rmjq" Feb 17 19:15:57 crc kubenswrapper[4680]: I0217 19:15:57.982106 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5rmjq"] Feb 17 19:15:59 crc kubenswrapper[4680]: I0217 19:15:59.790614 4680 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5rmjq" podUID="7f818128-19b5-40c1-8b7e-7d9499581891" containerName="registry-server" containerID="cri-o://c81892b73bbad0ea77ca3edcc460eb0f96f40f5768918aa4bc54d20fa04749fc" gracePeriod=2 Feb 17 19:16:00 crc kubenswrapper[4680]: I0217 19:16:00.536286 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5rmjq" Feb 17 19:16:00 crc kubenswrapper[4680]: I0217 19:16:00.658183 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f818128-19b5-40c1-8b7e-7d9499581891-utilities\") pod \"7f818128-19b5-40c1-8b7e-7d9499581891\" (UID: \"7f818128-19b5-40c1-8b7e-7d9499581891\") " Feb 17 19:16:00 crc kubenswrapper[4680]: I0217 19:16:00.658378 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f818128-19b5-40c1-8b7e-7d9499581891-catalog-content\") pod \"7f818128-19b5-40c1-8b7e-7d9499581891\" (UID: \"7f818128-19b5-40c1-8b7e-7d9499581891\") " Feb 17 19:16:00 crc kubenswrapper[4680]: I0217 19:16:00.658431 4680 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nnhv\" (UniqueName: \"kubernetes.io/projected/7f818128-19b5-40c1-8b7e-7d9499581891-kube-api-access-5nnhv\") pod \"7f818128-19b5-40c1-8b7e-7d9499581891\" (UID: \"7f818128-19b5-40c1-8b7e-7d9499581891\") " Feb 17 19:16:00 crc kubenswrapper[4680]: I0217 19:16:00.659651 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f818128-19b5-40c1-8b7e-7d9499581891-utilities" (OuterVolumeSpecName: "utilities") pod "7f818128-19b5-40c1-8b7e-7d9499581891" (UID: "7f818128-19b5-40c1-8b7e-7d9499581891"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 19:16:00 crc kubenswrapper[4680]: I0217 19:16:00.670497 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f818128-19b5-40c1-8b7e-7d9499581891-kube-api-access-5nnhv" (OuterVolumeSpecName: "kube-api-access-5nnhv") pod "7f818128-19b5-40c1-8b7e-7d9499581891" (UID: "7f818128-19b5-40c1-8b7e-7d9499581891"). InnerVolumeSpecName "kube-api-access-5nnhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 19:16:00 crc kubenswrapper[4680]: I0217 19:16:00.711855 4680 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f818128-19b5-40c1-8b7e-7d9499581891-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f818128-19b5-40c1-8b7e-7d9499581891" (UID: "7f818128-19b5-40c1-8b7e-7d9499581891"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 19:16:00 crc kubenswrapper[4680]: I0217 19:16:00.760495 4680 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nnhv\" (UniqueName: \"kubernetes.io/projected/7f818128-19b5-40c1-8b7e-7d9499581891-kube-api-access-5nnhv\") on node \"crc\" DevicePath \"\"" Feb 17 19:16:00 crc kubenswrapper[4680]: I0217 19:16:00.760989 4680 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f818128-19b5-40c1-8b7e-7d9499581891-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 19:16:00 crc kubenswrapper[4680]: I0217 19:16:00.761046 4680 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f818128-19b5-40c1-8b7e-7d9499581891-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 19:16:00 crc kubenswrapper[4680]: I0217 19:16:00.802412 4680 generic.go:334] "Generic (PLEG): container finished" podID="7f818128-19b5-40c1-8b7e-7d9499581891" containerID="c81892b73bbad0ea77ca3edcc460eb0f96f40f5768918aa4bc54d20fa04749fc" exitCode=0 Feb 17 19:16:00 crc kubenswrapper[4680]: I0217 19:16:00.802452 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rmjq" event={"ID":"7f818128-19b5-40c1-8b7e-7d9499581891","Type":"ContainerDied","Data":"c81892b73bbad0ea77ca3edcc460eb0f96f40f5768918aa4bc54d20fa04749fc"} Feb 17 19:16:00 crc kubenswrapper[4680]: I0217 19:16:00.802480 4680 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rmjq" event={"ID":"7f818128-19b5-40c1-8b7e-7d9499581891","Type":"ContainerDied","Data":"5213636334f679750037b6751e725e959e262bc281dc28480547b1c967a374b2"} Feb 17 19:16:00 crc kubenswrapper[4680]: I0217 19:16:00.802496 4680 scope.go:117] "RemoveContainer" containerID="c81892b73bbad0ea77ca3edcc460eb0f96f40f5768918aa4bc54d20fa04749fc" Feb 17 19:16:00 crc kubenswrapper[4680]: I0217 19:16:00.804288 4680 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5rmjq" Feb 17 19:16:00 crc kubenswrapper[4680]: I0217 19:16:00.821564 4680 scope.go:117] "RemoveContainer" containerID="1e7a1900beccb824cff1500bc9d4124a073a438af8138b06a4fe53de5d2bf150" Feb 17 19:16:00 crc kubenswrapper[4680]: I0217 19:16:00.842923 4680 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5rmjq"] Feb 17 19:16:00 crc kubenswrapper[4680]: I0217 19:16:00.850570 4680 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5rmjq"] Feb 17 19:16:00 crc kubenswrapper[4680]: I0217 19:16:00.852557 4680 scope.go:117] "RemoveContainer" containerID="f92022a452921fa258fcde4efc75dafbb6ad4abe36ecb8cd71eabace5007681a" Feb 17 19:16:00 crc kubenswrapper[4680]: I0217 19:16:00.883633 4680 scope.go:117] "RemoveContainer" containerID="c81892b73bbad0ea77ca3edcc460eb0f96f40f5768918aa4bc54d20fa04749fc" Feb 17 19:16:00 crc kubenswrapper[4680]: E0217 19:16:00.884001 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c81892b73bbad0ea77ca3edcc460eb0f96f40f5768918aa4bc54d20fa04749fc\": container with ID starting with c81892b73bbad0ea77ca3edcc460eb0f96f40f5768918aa4bc54d20fa04749fc not found: ID does not exist" containerID="c81892b73bbad0ea77ca3edcc460eb0f96f40f5768918aa4bc54d20fa04749fc" Feb 17 19:16:00 crc kubenswrapper[4680]: I0217 19:16:00.884102 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c81892b73bbad0ea77ca3edcc460eb0f96f40f5768918aa4bc54d20fa04749fc"} err="failed to get container status \"c81892b73bbad0ea77ca3edcc460eb0f96f40f5768918aa4bc54d20fa04749fc\": rpc error: code = NotFound desc = could not find container \"c81892b73bbad0ea77ca3edcc460eb0f96f40f5768918aa4bc54d20fa04749fc\": container with ID starting with c81892b73bbad0ea77ca3edcc460eb0f96f40f5768918aa4bc54d20fa04749fc not found: ID does not exist" Feb 17 19:16:00 crc kubenswrapper[4680]: I0217 19:16:00.884181 4680 scope.go:117] "RemoveContainer" containerID="1e7a1900beccb824cff1500bc9d4124a073a438af8138b06a4fe53de5d2bf150" Feb 17 19:16:00 crc kubenswrapper[4680]: E0217 19:16:00.884523 4680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e7a1900beccb824cff1500bc9d4124a073a438af8138b06a4fe53de5d2bf150\": container with ID starting with 1e7a1900beccb824cff1500bc9d4124a073a438af8138b06a4fe53de5d2bf150 not found: ID does not exist" containerID="1e7a1900beccb824cff1500bc9d4124a073a438af8138b06a4fe53de5d2bf150" Feb 17 19:16:00 crc kubenswrapper[4680]: I0217 19:16:00.884562 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e7a1900beccb824cff1500bc9d4124a073a438af8138b06a4fe53de5d2bf150"} err="failed to get container status \"1e7a1900beccb824cff1500bc9d4124a073a438af8138b06a4fe53de5d2bf150\": rpc error: code = NotFound desc = could not find container \"1e7a1900beccb824cff1500bc9d4124a073a438af8138b06a4fe53de5d2bf150\": container with ID starting with 1e7a1900beccb824cff1500bc9d4124a073a438af8138b06a4fe53de5d2bf150 not found: ID does not exist" Feb 17 19:16:00 crc kubenswrapper[4680]: I0217 19:16:00.884609 4680 scope.go:117] "RemoveContainer" containerID="f92022a452921fa258fcde4efc75dafbb6ad4abe36ecb8cd71eabace5007681a" Feb 17 19:16:00 crc kubenswrapper[4680]: E0217 19:16:00.884907 4680 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f92022a452921fa258fcde4efc75dafbb6ad4abe36ecb8cd71eabace5007681a\": container with ID starting with f92022a452921fa258fcde4efc75dafbb6ad4abe36ecb8cd71eabace5007681a not found: ID does not exist" containerID="f92022a452921fa258fcde4efc75dafbb6ad4abe36ecb8cd71eabace5007681a" Feb 17 19:16:00 crc kubenswrapper[4680]: I0217 19:16:00.884976 4680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f92022a452921fa258fcde4efc75dafbb6ad4abe36ecb8cd71eabace5007681a"} err="failed to get container status \"f92022a452921fa258fcde4efc75dafbb6ad4abe36ecb8cd71eabace5007681a\": rpc error: code = NotFound desc = could not find container \"f92022a452921fa258fcde4efc75dafbb6ad4abe36ecb8cd71eabace5007681a\": container with ID starting with f92022a452921fa258fcde4efc75dafbb6ad4abe36ecb8cd71eabace5007681a not found: ID does not exist" Feb 17 19:16:01 crc kubenswrapper[4680]: I0217 19:16:01.116217 4680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f818128-19b5-40c1-8b7e-7d9499581891" path="/var/lib/kubelet/pods/7f818128-19b5-40c1-8b7e-7d9499581891/volumes" Feb 17 19:16:05 crc kubenswrapper[4680]: I0217 19:16:05.588493 4680 patch_prober.go:28] interesting pod/machine-config-daemon-rfw56 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 19:16:05 crc kubenswrapper[4680]: I0217 19:16:05.589068 4680 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfw56" podUID="8dd08058-016e-4607-8be6-40463674c2fe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"